Test Report: Docker_Linux_crio 21550

                    
0aba0a8e31d541259ffdeb45c9650281430067b8:2025-09-17:41464

Failed tests (30/328)

Order  Failed test  Duration (s)
35 TestAddons/parallel/Registry 363.41
37 TestAddons/parallel/Ingress 492.43
41 TestAddons/parallel/CSI 373.39
44 TestAddons/parallel/LocalPath 345.69
46 TestAddons/parallel/Yakd 128.83
47 TestAddons/parallel/AmdGpuDevicePlugin 363.65
91 TestFunctional/parallel/DashboardCmd 302.53
98 TestFunctional/parallel/ServiceCmdConnect 603.22
100 TestFunctional/parallel/PersistentVolumeClaim 368.34
104 TestFunctional/parallel/MySQL 603.11
115 TestFunctional/parallel/ServiceCmd/DeployApp 600.65
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 240.68
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 107.45
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
153 TestFunctional/parallel/ServiceCmd/Format 0.58
154 TestFunctional/parallel/ServiceCmd/URL 0.57
165 TestMultiControlPlane/serial/AddWorkerNode 30.98
168 TestMultiControlPlane/serial/CopyFile 16.37
169 TestMultiControlPlane/serial/StopSecondaryNode 21.85
171 TestMultiControlPlane/serial/RestartSecondaryNode 48.31
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 461.06
174 TestMultiControlPlane/serial/DeleteSecondaryNode 13.42
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 2.73
177 TestMultiControlPlane/serial/RestartCluster 1053.16
252 TestKubernetesUpgrade 446.9
343 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 47.98
349 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 1.48
350 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 1.54
352 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 1.66
353 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.83
TestAddons/parallel/Registry (363.41s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.939202ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
helpers_test.go:337: TestAddons/parallel/Registry: WARNING: pod list for "kube-system" "actual-registry=true" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:384: ***** TestAddons/parallel/Registry: pod "actual-registry=true" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:384: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-069011 -n addons-069011
addons_test.go:384: TestAddons/parallel/Registry: showing logs for failed pods as of 2025-09-17 00:02:47.615903044 +0000 UTC m=+875.083246612
addons_test.go:384: (dbg) Run:  kubectl --context addons-069011 describe po registry-66898fdd98-bl4r5 -n kube-system
addons_test.go:384: (dbg) kubectl --context addons-069011 describe po registry-66898fdd98-bl4r5 -n kube-system:
Name:             registry-66898fdd98-bl4r5
Namespace:        kube-system
Priority:         0
Service Account:  default
Node:             addons-069011/192.168.49.2
Start Time:       Tue, 16 Sep 2025 23:49:50 +0000
Labels:           actual-registry=true
addonmanager.kubernetes.io/mode=Reconcile
kubernetes.io/minikube-addons=registry
pod-template-hash=66898fdd98
Annotations:      <none>
Status:           Pending
IP:               10.244.0.19
IPs:
IP:           10.244.0.19
Controlled By:  ReplicaSet/registry-66898fdd98
Containers:
registry:
Container ID:   
Image:          docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d
Image ID:       
Port:           5000/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
REGISTRY_STORAGE_DELETE_ENABLED:  true
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvnp8 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-vvnp8:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age                 From               Message
----     ------            ----                ----               -------
Warning  FailedScheduling  13m                 default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
Normal   Scheduled         12m                 default-scheduler  Successfully assigned kube-system/registry-66898fdd98-bl4r5 to addons-069011
Normal   Pulling           2m9s (x5 over 12m)  kubelet            Pulling image "docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d"
Warning  Failed            39s (x5 over 10m)   kubelet            Failed to pull image "docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d": reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed            39s (x5 over 10m)   kubelet            Error: ErrImagePull
Normal   BackOff           0s (x13 over 10m)   kubelet            Back-off pulling image "docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d"
Warning  Failed            0s (x13 over 10m)   kubelet            Error: ImagePullBackOff
addons_test.go:384: (dbg) Run:  kubectl --context addons-069011 logs registry-66898fdd98-bl4r5 -n kube-system
addons_test.go:384: (dbg) Non-zero exit: kubectl --context addons-069011 logs registry-66898fdd98-bl4r5 -n kube-system: exit status 1 (73.539157ms)

** stderr ** 
	Error from server (BadRequest): container "registry" in pod "registry-66898fdd98-bl4r5" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:384: kubectl --context addons-069011 logs registry-66898fdd98-bl4r5 -n kube-system: exit status 1
addons_test.go:385: failed waiting for pod actual-registry: actual-registry=true within 6m0s: context deadline exceeded
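Note: the events above point at Docker Hub's unauthenticated pull rate limit (toomanyrequests), not at an image or addon regression. A minimal sketch for confirming and working around the limit, assuming the addons-069011 profile is still running; docker pull/login and minikube image load are standard subcommands, but the workaround is a suggestion, not part of the test harness:

    # Reproduce the failing pull on the host (hits the same rate limit):
    docker pull docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d

    # Workaround sketch: authenticate for a higher limit, pull once on the host,
    # then side-load the image so the kubelet never pulls from Docker Hub:
    docker login docker.io
    minikube -p addons-069011 image load docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d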
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Registry]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-069011
helpers_test.go:243: (dbg) docker inspect addons-069011:

-- stdout --
	[
	    {
	        "Id": "678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1",
	        "Created": "2025-09-16T23:48:50.029636255Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 523240,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-16T23:48:50.075029861Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/hosts",
	        "LogPath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1-json.log",
	        "Name": "/addons-069011",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-069011:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-069011",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1",
	                "LowerDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-069011",
	                "Source": "/var/lib/docker/volumes/addons-069011/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-069011",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-069011",
	                "name.minikube.sigs.k8s.io": "addons-069011",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f7ea0b62281ff8981f73b140342aff58601fbb663df7278dfdd6743a41abcca5",
	            "SandboxKey": "/var/run/docker/netns/f7ea0b62281f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-069011": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:4c:3e:1e:87:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d62ec0fa3bfb3ffd62859a508f03996c549db14f34473599ddd1b9022067b7b9",
	                    "EndpointID": "f8f4fe858390c8f96bc24eec26736fad3a3b1ba30f09e93e016a6a79e947f7af",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-069011",
	                        "678205c9d470"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
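For reference, the NetworkSettings.Ports block above maps each exposed guest port to an ephemeral port on 127.0.0.1 (for example 8443/tcp -> 127.0.0.1:33136, which is how the status and kubectl commands reach the API server). The same mapping can be listed with the standard docker port subcommand while the container is running:

    docker port addons-069011
    # expected to include a line like:
    # 8443/tcp -> 127.0.0.1:33136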
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-069011 -n addons-069011
helpers_test.go:252: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-069011 logs -n 25: (1.435876857s)
helpers_test.go:260: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-997829 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-997829   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ delete  │ -p download-only-997829                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-997829   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ start   │ -o=json --download-only -p download-only-515641 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-515641   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ delete  │ -p download-only-515641                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-515641   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ delete  │ -p download-only-997829                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-997829   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ delete  │ -p download-only-515641                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-515641   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ start   │ --download-only -p download-docker-660125 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-660125 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ delete  │ -p download-docker-660125                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-660125 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ start   │ --download-only -p binary-mirror-785971 --alsologtostderr --binary-mirror http://127.0.0.1:38515 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-785971   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ delete  │ -p binary-mirror-785971                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-785971   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ addons  │ enable dashboard -p addons-069011                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ addons  │ disable dashboard -p addons-069011                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ start   │ -p addons-069011 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:55 UTC │
	│ addons  │ addons-069011 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:55 UTC │ 16 Sep 25 23:55 UTC │
	│ addons  │ addons-069011 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ enable headlamp -p addons-069011 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ addons-069011 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-069011                                                                                                                                                                                                                                                                                                                                                                                           │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ addons-069011 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ addons-069011 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:57 UTC │
	│ addons  │ addons-069011 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:58 UTC │ 16 Sep 25 23:58 UTC │
	│ addons  │ addons-069011 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-069011          │ jenkins │ v1.37.0 │ 17 Sep 25 00:00 UTC │ 17 Sep 25 00:00 UTC │
	│ addons  │ addons-069011 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-069011          │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:48:27
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:48:27.723751  522590 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:48:27.723864  522590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:48:27.723869  522590 out.go:374] Setting ErrFile to fd 2...
	I0916 23:48:27.723873  522590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:48:27.724066  522590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0916 23:48:27.724618  522590 out.go:368] Setting JSON to false
	I0916 23:48:27.725494  522590 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9051,"bootTime":1758057457,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:48:27.725585  522590 start.go:140] virtualization: kvm guest
	I0916 23:48:27.728073  522590 out.go:179] * [addons-069011] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:48:27.729850  522590 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:48:27.729868  522590 notify.go:220] Checking for updates...
	I0916 23:48:27.733822  522590 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:48:27.736141  522590 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0916 23:48:27.738039  522590 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0916 23:48:27.740423  522590 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:48:27.743368  522590 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:48:27.746574  522590 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:48:27.771724  522590 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:48:27.771874  522590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:48:27.829971  522590 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:48:27.818365984 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:48:27.830249  522590 docker.go:318] overlay module found
	I0916 23:48:27.832946  522590 out.go:179] * Using the docker driver based on user configuration
	I0916 23:48:27.834751  522590 start.go:304] selected driver: docker
	I0916 23:48:27.834826  522590 start.go:918] validating driver "docker" against <nil>
	I0916 23:48:27.834849  522590 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:48:27.835571  522590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:48:27.897913  522590 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:48:27.886229333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:48:27.898100  522590 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:48:27.898315  522590 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:48:27.900183  522590 out.go:179] * Using Docker driver with root privileges
	I0916 23:48:27.901481  522590 cni.go:84] Creating CNI manager for ""
	I0916 23:48:27.901597  522590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 23:48:27.901613  522590 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:48:27.901710  522590 start.go:348] cluster config:
	{Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:48:27.903324  522590 out.go:179] * Starting "addons-069011" primary control-plane node in "addons-069011" cluster
	I0916 23:48:27.904623  522590 cache.go:123] Beginning downloading kic base image for docker with crio
	I0916 23:48:27.905841  522590 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:48:27.907270  522590 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:48:27.907330  522590 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:48:27.907328  522590 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0916 23:48:27.907354  522590 cache.go:58] Caching tarball of preloaded images
	I0916 23:48:27.907495  522590 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 23:48:27.907513  522590 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0916 23:48:27.907895  522590 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/config.json ...
	I0916 23:48:27.907924  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/config.json: {Name:mk15dc7feab5fd17bb004b2e5f6ac3bc55ac0d4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:27.925199  522590 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0916 23:48:27.925352  522590 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0916 23:48:27.925371  522590 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0916 23:48:27.925375  522590 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0916 23:48:27.925383  522590 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0916 23:48:27.925403  522590 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0916 23:48:40.932191  522590 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0916 23:48:40.932224  522590 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:48:40.932259  522590 start.go:360] acquireMachinesLock for addons-069011: {Name:mk9387b718f452cc25627a84d4c20b7f46084ff2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:48:40.932371  522590 start.go:364] duration metric: took 90.542µs to acquireMachinesLock for "addons-069011"
	I0916 23:48:40.932411  522590 start.go:93] Provisioning new machine with config: &{Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 23:48:40.932527  522590 start.go:125] createHost starting for "" (driver="docker")
	I0916 23:48:40.934531  522590 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0916 23:48:40.934774  522590 start.go:159] libmachine.API.Create for "addons-069011" (driver="docker")
	I0916 23:48:40.934810  522590 client.go:168] LocalClient.Create starting
	I0916 23:48:40.934920  522590 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0916 23:48:41.819608  522590 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0916 23:48:42.094971  522590 cli_runner.go:164] Run: docker network inspect addons-069011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 23:48:42.113173  522590 cli_runner.go:211] docker network inspect addons-069011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 23:48:42.113240  522590 network_create.go:284] running [docker network inspect addons-069011] to gather additional debugging logs...
	I0916 23:48:42.113258  522590 cli_runner.go:164] Run: docker network inspect addons-069011
	W0916 23:48:42.130815  522590 cli_runner.go:211] docker network inspect addons-069011 returned with exit code 1
	I0916 23:48:42.130846  522590 network_create.go:287] error running [docker network inspect addons-069011]: docker network inspect addons-069011: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-069011 not found
	I0916 23:48:42.130884  522590 network_create.go:289] output of [docker network inspect addons-069011]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-069011 not found
	
	** /stderr **
	I0916 23:48:42.130990  522590 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:48:42.149832  522590 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002180220}
	I0916 23:48:42.149931  522590 network_create.go:124] attempt to create docker network addons-069011 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 23:48:42.150036  522590 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-069011 addons-069011
	I0916 23:48:42.212157  522590 network_create.go:108] docker network addons-069011 192.168.49.0/24 created
	I0916 23:48:42.212194  522590 kic.go:121] calculated static IP "192.168.49.2" for the "addons-069011" container
	I0916 23:48:42.212312  522590 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:48:42.229867  522590 cli_runner.go:164] Run: docker volume create addons-069011 --label name.minikube.sigs.k8s.io=addons-069011 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:48:42.252846  522590 oci.go:103] Successfully created a docker volume addons-069011
	I0916 23:48:42.252968  522590 cli_runner.go:164] Run: docker run --rm --name addons-069011-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069011 --entrypoint /usr/bin/test -v addons-069011:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:48:45.649491  522590 cli_runner.go:217] Completed: docker run --rm --name addons-069011-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069011 --entrypoint /usr/bin/test -v addons-069011:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (3.39647838s)
	I0916 23:48:45.649523  522590 oci.go:107] Successfully prepared a docker volume addons-069011
	I0916 23:48:45.649558  522590 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:48:45.649589  522590 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:48:45.649695  522590 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-069011:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:48:49.956300  522590 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-069011:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.306552681s)
	I0916 23:48:49.956343  522590 kic.go:203] duration metric: took 4.306749088s to extract preloaded images to volume ...
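Rather than pulling every image at first start, minikube mounts a preloaded tarball of /var/lib (container images plus Kubernetes binaries, lz4-compressed) into a throwaway kicbase container and untars it straight into the node's named volume; here the extraction took about 4.3s instead of the minutes a cold image pull would need. A sketch of that step under the same assumptions (extractPreload is a hypothetical wrapper around the docker invocation above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload untars an lz4-compressed preload tarball into a docker
    // volume by running tar inside a throwaway kicbase container.
    func extractPreload(tarball, volume, image string) error {
        out, err := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").CombinedOutput()
        if err != nil {
            return fmt.Errorf("extract: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        _ = extractPreload(
            "/path/to/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4",
            "addons-069011", "gcr.io/k8s-minikube/kicbase:v0.0.48")
    }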
	W0916 23:48:49.956477  522590 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:48:49.956523  522590 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:48:49.956572  522590 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:48:50.013382  522590 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-069011 --name addons-069011 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069011 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-069011 --network addons-069011 --ip 192.168.49.2 --volume addons-069011:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:48:50.304600  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Running}}
	I0916 23:48:50.323420  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:48:50.342386  522590 cli_runner.go:164] Run: docker exec addons-069011 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:48:50.402276  522590 oci.go:144] the created container "addons-069011" has a running status.
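The long docker run above publishes the container's service ports (22 for SSH, 8443 for the API server, 2376, 5000, 32443) on ephemeral 127.0.0.1 ports. minikube never hardcodes those host ports; it resolves them on demand with a container-inspect template, as the SSH setup just below does for 22/tcp. A sketch of that lookup (hostPort is a hypothetical name):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPort asks docker which localhost port was mapped to a container
    // port, using the same inspect template seen in the log for SSH (22/tcp).
    func hostPort(container, port string) (string, error) {
        tmpl := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", port)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        p, err := hostPort("addons-069011", "22/tcp")
        fmt.Println(p, err) // e.g. 33133 <nil>
    }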
	I0916 23:48:50.402326  522590 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa...
	I0916 23:48:50.521235  522590 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:48:50.553384  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:48:50.579068  522590 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:48:50.579099  522590 kic_runner.go:114] Args: [docker exec --privileged addons-069011 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:48:50.638566  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:48:50.659803  522590 machine.go:93] provisionDockerMachine start ...
	I0916 23:48:50.660411  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:50.680019  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:50.680310  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:50.680332  522590 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:48:50.820950  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-069011
	
	I0916 23:48:50.820990  522590 ubuntu.go:182] provisioning hostname "addons-069011"
	I0916 23:48:50.821063  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:50.841195  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:50.841673  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:50.841710  522590 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-069011 && echo "addons-069011" | sudo tee /etc/hostname
	I0916 23:48:50.996855  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-069011
	
	I0916 23:48:50.996967  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.016407  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:51.016637  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:51.016655  522590 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-069011' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-069011/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-069011' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:48:51.154270  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:48:51.154311  522590 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0916 23:48:51.154380  522590 ubuntu.go:190] setting up certificates
	I0916 23:48:51.154420  522590 provision.go:84] configureAuth start
	I0916 23:48:51.154487  522590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069011
	I0916 23:48:51.173820  522590 provision.go:143] copyHostCerts
	I0916 23:48:51.173904  522590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0916 23:48:51.174069  522590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0916 23:48:51.174140  522590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0916 23:48:51.174195  522590 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.addons-069011 san=[127.0.0.1 192.168.49.2 addons-069011 localhost minikube]
	I0916 23:48:51.417777  522590 provision.go:177] copyRemoteCerts
	I0916 23:48:51.417839  522590 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:48:51.417897  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.435902  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:51.535686  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 23:48:51.563321  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:48:51.590971  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 23:48:51.617420  522590 provision.go:87] duration metric: took 462.978002ms to configureAuth
	I0916 23:48:51.617461  522590 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:48:51.617668  522590 config.go:182] Loaded profile config "addons-069011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:48:51.617795  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.638144  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:51.638409  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:51.638436  522590 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 23:48:51.891077  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 23:48:51.891114  522590 machine.go:96] duration metric: took 1.230812219s to provisionDockerMachine
	I0916 23:48:51.891125  522590 client.go:171] duration metric: took 10.956309615s to LocalClient.Create
	I0916 23:48:51.891146  522590 start.go:167] duration metric: took 10.956377105s to libmachine.API.Create "addons-069011"
	I0916 23:48:51.891155  522590 start.go:293] postStartSetup for "addons-069011" (driver="docker")
	I0916 23:48:51.891170  522590 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:48:51.891245  522590 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:48:51.891288  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.909900  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.010593  522590 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:48:52.014317  522590 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:48:52.014357  522590 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:48:52.014366  522590 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:48:52.014375  522590 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:48:52.014406  522590 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0916 23:48:52.014479  522590 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0916 23:48:52.014515  522590 start.go:296] duration metric: took 123.348567ms for postStartSetup
	I0916 23:48:52.014852  522590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069011
	I0916 23:48:52.034024  522590 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/config.json ...
	I0916 23:48:52.034357  522590 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:48:52.034430  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:52.053383  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.147697  522590 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:48:52.152300  522590 start.go:128] duration metric: took 11.219755748s to createHost
	I0916 23:48:52.152322  522590 start.go:83] releasing machines lock for "addons-069011", held for 11.219940729s
	I0916 23:48:52.152383  522590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069011
	I0916 23:48:52.170897  522590 ssh_runner.go:195] Run: cat /version.json
	I0916 23:48:52.170959  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:52.170960  522590 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:48:52.171033  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:52.190054  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.190316  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.282770  522590 ssh_runner.go:195] Run: systemctl --version
	I0916 23:48:52.358127  522590 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 23:48:52.500662  522590 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:48:52.505640  522590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:48:52.530299  522590 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:48:52.530413  522590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:48:52.562277  522590 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
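Because the kic base image ships default CNI configs, minikube sidelines anything that could conflict with the kindnet CNI it installs later: loopback and bridge/podman configs in /etc/cni/net.d are renamed with a .mk_disabled suffix so CRI-O will not pick a stale default network. A Go sketch of the same rename pass (disableCNIConfs is hypothetical):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // disableCNIConfs renames matching CNI config files in /etc/cni/net.d by
    // appending ".mk_disabled", mirroring the `find ... -exec mv` in the log.
    func disableCNIConfs(patterns ...string) error {
        for _, pat := range patterns {
            matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
            if err != nil {
                return err
            }
            for _, m := range matches {
                if filepath.Ext(m) == ".mk_disabled" {
                    continue // already disabled
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return err
                }
                fmt.Println("disabled", m)
            }
        }
        return nil
    }

    func main() {
        _ = disableCNIConfs("*loopback.conf*", "*bridge*", "*podman*")
    }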
	I0916 23:48:52.562302  522590 start.go:495] detecting cgroup driver to use...
	I0916 23:48:52.562333  522590 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:48:52.562405  522590 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:48:52.578904  522590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:48:52.592493  522590 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:48:52.592567  522590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:48:52.607812  522590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:48:52.623718  522590 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:48:52.695401  522590 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:48:52.772869  522590 docker.go:234] disabling docker service ...
	I0916 23:48:52.772931  522590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:48:52.793499  522590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:48:52.806446  522590 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:48:52.880604  522590 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:48:52.994666  522590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:48:53.008181  522590 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:48:53.026581  522590 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0916 23:48:53.026648  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.040463  522590 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0916 23:48:53.040546  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.052415  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.063700  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.074445  522590 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:48:53.085081  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.097098  522590 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.114871  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.125827  522590 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:48:53.135170  522590 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:48:53.145546  522590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:48:53.253634  522590 ssh_runner.go:195] Run: sudo systemctl restart crio
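The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place before restarting CRI-O: it pins the pause image, switches the cgroup manager to systemd (matching the driver detected on the host), moves conmon into the pod cgroup, and allows unprivileged binds to low ports. Reconstructing from those commands, the drop-in should end up roughly like the following; the section headers are assumed from the stock CRI-O config layout, not captured from the node:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]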
	I0916 23:48:53.356442  522590 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 23:48:53.356540  522590 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 23:48:53.360459  522590 start.go:563] Will wait 60s for crictl version
	I0916 23:48:53.360526  522590 ssh_runner.go:195] Run: which crictl
	I0916 23:48:53.364103  522590 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:48:53.402094  522590 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 23:48:53.402233  522590 ssh_runner.go:195] Run: crio --version
	I0916 23:48:53.441123  522590 ssh_runner.go:195] Run: crio --version
	I0916 23:48:53.481919  522590 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0916 23:48:53.483462  522590 cli_runner.go:164] Run: docker network inspect addons-069011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:48:53.502054  522590 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:48:53.506129  522590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
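host.minikube.internal is minikube's alias for the docker network gateway, giving workloads a stable name for the host machine. The bash one-liner above rewrites /etc/hosts idempotently: strip any existing mapping, then append the fresh one. The same logic in Go, as a sketch (setHostsEntry is a hypothetical helper, and the same pattern recurs later for control-plane.minikube.internal):

    package main

    import (
        "os"
        "strings"
    )

    // setHostsEntry ensures the hosts file maps name to ip exactly once,
    // mirroring the grep -v / echo pipeline in the log.
    func setHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale mapping for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        _ = setHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal")
    }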
	I0916 23:48:53.518646  522590 kubeadm.go:875] updating cluster {Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 23:48:53.518762  522590 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:48:53.518816  522590 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:48:53.590933  522590 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 23:48:53.590961  522590 crio.go:433] Images already preloaded, skipping extraction
	I0916 23:48:53.591020  522590 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:48:53.627023  522590 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 23:48:53.627057  522590 cache_images.go:85] Images are preloaded, skipping loading
	I0916 23:48:53.627066  522590 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0916 23:48:53.627155  522590 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-069011 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:48:53.627228  522590 ssh_runner.go:195] Run: crio config
	I0916 23:48:53.674869  522590 cni.go:84] Creating CNI manager for ""
	I0916 23:48:53.674893  522590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 23:48:53.674906  522590 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 23:48:53.674926  522590 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-069011 NodeName:addons-069011 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 23:48:53.675093  522590 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-069011"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 23:48:53.675157  522590 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:48:53.685496  522590 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:48:53.685568  522590 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 23:48:53.695890  522590 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0916 23:48:53.715420  522590 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:48:53.738183  522590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0916 23:48:53.758975  522590 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 23:48:53.763002  522590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:48:53.775153  522590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:48:53.837066  522590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:48:53.861100  522590 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011 for IP: 192.168.49.2
	I0916 23:48:53.861120  522590 certs.go:194] generating shared ca certs ...
	I0916 23:48:53.861145  522590 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:53.861308  522590 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0916 23:48:54.155814  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt ...
	I0916 23:48:54.155846  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt: {Name:mk009b1713fd08c38e8c6ac054b69276424ded29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.156071  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key ...
	I0916 23:48:54.156093  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key: {Name:mk39b68875de7851b17692da85e287f48166d2fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.156213  522590 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0916 23:48:54.291541  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt ...
	I0916 23:48:54.291579  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt: {Name:mk94baf5fb1a8134bb0c9a9f3d32b751fe0bf777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.291793  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key ...
	I0916 23:48:54.291817  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key: {Name:mk06b3e70f919971eec12f66023f6279f2a9059e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.291928  522590 certs.go:256] generating profile certs ...
	I0916 23:48:54.292014  522590 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.key
	I0916 23:48:54.292060  522590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt with IP's: []
	I0916 23:48:54.529110  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt ...
	I0916 23:48:54.529147  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: {Name:mk9156e00306316f93255eae42ecd81bb5d60b0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.529374  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.key ...
	I0916 23:48:54.529406  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.key: {Name:mk15bd78effcf8815d5571a84284c31db31b997e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.529525  522590 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd
	I0916 23:48:54.529556  522590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0916 23:48:54.601370  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd ...
	I0916 23:48:54.601415  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd: {Name:mkb42f86b810cddd05c27083cd910769800b1942 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.602548  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd ...
	I0916 23:48:54.602578  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd: {Name:mkf41ec91a0589b4d908c830ee946e4604a6886c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.603343  522590 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt
	I0916 23:48:54.603493  522590 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key
	I0916 23:48:54.603577  522590 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key
	I0916 23:48:54.603602  522590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt with IP's: []
	I0916 23:48:54.685718  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt ...
	I0916 23:48:54.685751  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt: {Name:mk4c4f7fbd326f3d00c11caa86441b715a5844e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.686777  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key ...
	I0916 23:48:54.686809  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key: {Name:mkde64e1b9ef5bdc16ad6f2b11b391d65f689b86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.687062  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:48:54.687107  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0916 23:48:54.687130  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:48:54.687161  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
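certs.go is building a local PKI from scratch: two self-signed CAs (minikubeCA and proxyClientCA), then per-profile certificates signed by them with the SANs listed above (10.96.0.1 for the in-cluster service IP, 192.168.49.2 for the node). A minimal sketch of the self-signed-CA step using only the Go standard library; this mirrors the idea, not minikube's actual crypto.go:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        // Generate a CA key pair and a self-signed certificate, roughly the
        // shape of what happens for "minikubeCA" before profile certs are
        // signed with it.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
        pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    }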
	I0916 23:48:54.687932  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:48:54.717259  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:48:54.744669  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:48:54.771438  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 23:48:54.799454  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 23:48:54.826220  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:48:54.853243  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:48:54.878912  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 23:48:54.905711  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:48:54.935757  522590 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 23:48:54.956698  522590 ssh_runner.go:195] Run: openssl version
	I0916 23:48:54.962817  522590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:48:54.976805  522590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:48:54.980979  522590 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:48:54.981051  522590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:48:54.988637  522590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:48:55.000379  522590 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:48:55.004385  522590 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:48:55.004456  522590 kubeadm.go:392] StartCluster: {Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:48:55.004547  522590 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 23:48:55.004599  522590 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 23:48:55.043443  522590 cri.go:89] found id: ""
	I0916 23:48:55.043525  522590 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 23:48:55.053975  522590 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 23:48:55.064119  522590 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 23:48:55.064186  522590 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 23:48:55.074381  522590 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 23:48:55.074421  522590 kubeadm.go:157] found existing configuration files:
	
	I0916 23:48:55.074469  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 23:48:55.084667  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 23:48:55.084749  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 23:48:55.095859  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 23:48:55.106006  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 23:48:55.106068  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 23:48:55.115485  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 23:48:55.124880  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 23:48:55.124952  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 23:48:55.134292  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 23:48:55.144662  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 23:48:55.144725  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 23:48:55.154111  522590 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 23:48:55.211692  522590 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0916 23:48:55.271378  522590 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 23:49:04.949743  522590 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0916 23:49:04.949820  522590 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 23:49:04.949928  522590 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 23:49:04.950016  522590 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0916 23:49:04.950100  522590 kubeadm.go:310] OS: Linux
	I0916 23:49:04.950168  522590 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 23:49:04.950250  522590 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 23:49:04.950311  522590 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 23:49:04.950355  522590 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 23:49:04.950436  522590 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 23:49:04.950511  522590 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 23:49:04.950590  522590 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 23:49:04.950659  522590 kubeadm.go:310] CGROUPS_IO: enabled
	I0916 23:49:04.950779  522590 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 23:49:04.950896  522590 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 23:49:04.950988  522590 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 23:49:04.951039  522590 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 23:49:04.953148  522590 out.go:252]   - Generating certificates and keys ...
	I0916 23:49:04.953253  522590 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 23:49:04.953350  522590 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 23:49:04.953473  522590 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 23:49:04.953544  522590 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 23:49:04.953598  522590 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 23:49:04.953656  522590 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 23:49:04.953723  522590 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 23:49:04.953871  522590 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-069011 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:49:04.953944  522590 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 23:49:04.954104  522590 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-069011 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:49:04.954204  522590 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 23:49:04.954308  522590 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 23:49:04.954373  522590 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 23:49:04.954472  522590 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 23:49:04.954529  522590 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 23:49:04.954641  522590 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 23:49:04.954719  522590 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 23:49:04.954827  522590 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 23:49:04.954889  522590 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 23:49:04.954961  522590 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 23:49:04.955029  522590 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 23:49:04.956667  522590 out.go:252]   - Booting up control plane ...
	I0916 23:49:04.956807  522590 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 23:49:04.956925  522590 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 23:49:04.956985  522590 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 23:49:04.957219  522590 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 23:49:04.957368  522590 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0916 23:49:04.957516  522590 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0916 23:49:04.957633  522590 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 23:49:04.957703  522590 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 23:49:04.957908  522590 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 23:49:04.958044  522590 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 23:49:04.958151  522590 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.203651ms
	I0916 23:49:04.958278  522590 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0916 23:49:04.958374  522590 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0916 23:49:04.958531  522590 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0916 23:49:04.958637  522590 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0916 23:49:04.958758  522590 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.870805967s
	I0916 23:49:04.958876  522590 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.059203573s
	I0916 23:49:04.958980  522590 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.002212231s
	I0916 23:49:04.959143  522590 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 23:49:04.959322  522590 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 23:49:04.959464  522590 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 23:49:04.959729  522590 kubeadm.go:310] [mark-control-plane] Marking the node addons-069011 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 23:49:04.959828  522590 kubeadm.go:310] [bootstrap-token] Using token: hth27u.vwd374r3m591cy8w
	I0916 23:49:04.961508  522590 out.go:252]   - Configuring RBAC rules ...
	I0916 23:49:04.961663  522590 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 23:49:04.961761  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 23:49:04.961918  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 23:49:04.962103  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 23:49:04.962249  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 23:49:04.962324  522590 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 23:49:04.962449  522590 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 23:49:04.962510  522590 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 23:49:04.962584  522590 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 23:49:04.962595  522590 kubeadm.go:310] 
	I0916 23:49:04.962677  522590 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 23:49:04.962687  522590 kubeadm.go:310] 
	I0916 23:49:04.962800  522590 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 23:49:04.962816  522590 kubeadm.go:310] 
	I0916 23:49:04.962858  522590 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 23:49:04.962957  522590 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 23:49:04.963031  522590 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 23:49:04.963041  522590 kubeadm.go:310] 
	I0916 23:49:04.963139  522590 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 23:49:04.963150  522590 kubeadm.go:310] 
	I0916 23:49:04.963217  522590 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 23:49:04.963226  522590 kubeadm.go:310] 
	I0916 23:49:04.963305  522590 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 23:49:04.963432  522590 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 23:49:04.963527  522590 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 23:49:04.963541  522590 kubeadm.go:310] 
	I0916 23:49:04.963668  522590 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 23:49:04.963778  522590 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 23:49:04.963792  522590 kubeadm.go:310] 
	I0916 23:49:04.963908  522590 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hth27u.vwd374r3m591cy8w \
	I0916 23:49:04.964060  522590 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 \
	I0916 23:49:04.964108  522590 kubeadm.go:310] 	--control-plane 
	I0916 23:49:04.964118  522590 kubeadm.go:310] 
	I0916 23:49:04.964224  522590 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 23:49:04.964234  522590 kubeadm.go:310] 
	I0916 23:49:04.964354  522590 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hth27u.vwd374r3m591cy8w \
	I0916 23:49:04.964531  522590 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 
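
The `--discovery-token-ca-cert-hash` printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info; joining nodes use it to pin the control plane's CA during TLS bootstrap. A minimal Go sketch of how such a hash can be computed from the CA certificate (the `/etc/kubernetes/pki/ca.crt` path is the kubeadm default and is assumed here; this is an illustration, not minikube's own code):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Read the cluster CA certificate (PEM); path assumed from kubeadm defaults.
		data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}

Running this against the cluster's ca.crt should reproduce the `sha256:641c59b7...` value shown in the join command.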
	I0916 23:49:04.964546  522590 cni.go:84] Creating CNI manager for ""
	I0916 23:49:04.964565  522590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 23:49:04.966440  522590 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0916 23:49:04.968135  522590 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 23:49:04.972876  522590 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0916 23:49:04.972901  522590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 23:49:04.992864  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 23:49:05.238639  522590 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 23:49:05.238825  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:05.238851  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-069011 minikube.k8s.io/updated_at=2025_09_16T23_49_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=addons-069011 minikube.k8s.io/primary=true
	I0916 23:49:05.248222  522590 ops.go:34] apiserver oom_adj: -16
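
The `apiserver oom_adj: -16` line is read straight from procfs: the preceding ssh_runner step shells out to `cat /proc/$(pgrep kube-apiserver)/oom_adj` to confirm the API server is protected from the OOM killer. A hedged Go equivalent (the pid lookup via pgrep is replaced with a hard-coded placeholder pid for brevity):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// readOOMAdj returns the legacy oom_adj score for a pid.
	// (Modern kernels also expose /proc/<pid>/oom_score_adj.)
	func readOOMAdj(pid int) (string, error) {
		b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(b)), nil
	}

	func main() {
		// Assumed pid of kube-apiserver; in the log it is found with pgrep.
		adj, err := readOOMAdj(1234)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("apiserver oom_adj:", adj)
	}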
	I0916 23:49:05.324340  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:05.825316  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:06.324537  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:06.824724  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:07.325050  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:07.824729  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:08.325083  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:08.824525  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:09.324551  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:09.825331  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:09.895926  522590 kubeadm.go:1105] duration metric: took 4.65716259s to wait for elevateKubeSystemPrivileges
	I0916 23:49:09.895964  522590 kubeadm.go:394] duration metric: took 14.891511977s to StartCluster
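
The burst of `kubectl get sa default` runs above is a poll loop: minikube re-checks roughly every 500ms until the "default" ServiceAccount exists (it must, before the minikube-rbac ClusterRoleBinding against it can take effect), then records the elapsed time. A minimal sketch of that pattern (command and interval taken from the log; the timeout is assumed, and this is not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitFor runs the command every interval until it succeeds or timeout elapses.
	func waitFor(interval, timeout time.Duration, name string, args ...string) error {
		deadline := time.Now().Add(timeout)
		for {
			if err := exec.Command(name, args...).Run(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s waiting for %s", timeout, name)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		start := time.Now()
		// Same check the log shows: poll until the default ServiceAccount exists.
		err := waitFor(500*time.Millisecond, time.Minute,
			"kubectl", "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		fmt.Printf("took %s to wait for default service account\n", time.Since(start))
	}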
	I0916 23:49:09.895989  522590 settings.go:142] acquiring lock: {Name:mk3b4e5824fb8718eece00dc70a9d05f0af2a028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:49:09.896108  522590 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0916 23:49:09.896612  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:49:09.896807  522590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 23:49:09.896820  522590 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 23:49:09.896883  522590 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 23:49:09.897046  522590 config.go:182] Loaded profile config "addons-069011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:49:09.897061  522590 addons.go:69] Setting volcano=true in profile "addons-069011"
	I0916 23:49:09.897068  522590 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-069011"
	I0916 23:49:09.897082  522590 addons.go:238] Setting addon volcano=true in "addons-069011"
	I0916 23:49:09.897052  522590 addons.go:69] Setting yakd=true in profile "addons-069011"
	I0916 23:49:09.897090  522590 addons.go:69] Setting registry-creds=true in profile "addons-069011"
	I0916 23:49:09.897102  522590 addons.go:238] Setting addon yakd=true in "addons-069011"
	I0916 23:49:09.897112  522590 addons.go:238] Setting addon registry-creds=true in "addons-069011"
	I0916 23:49:09.897122  522590 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-069011"
	I0916 23:49:09.897128  522590 addons.go:69] Setting storage-provisioner=true in profile "addons-069011"
	I0916 23:49:09.897146  522590 addons.go:69] Setting volumesnapshots=true in profile "addons-069011"
	I0916 23:49:09.897161  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897169  522590 addons.go:69] Setting metrics-server=true in profile "addons-069011"
	I0916 23:49:09.897176  522590 addons.go:69] Setting cloud-spanner=true in profile "addons-069011"
	I0916 23:49:09.897178  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897047  522590 addons.go:69] Setting inspektor-gadget=true in profile "addons-069011"
	I0916 23:49:09.897165  522590 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-069011"
	I0916 23:49:09.897206  522590 addons.go:238] Setting addon cloud-spanner=true in "addons-069011"
	I0916 23:49:09.897216  522590 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-069011"
	I0916 23:49:09.897232  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897233  522590 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-069011"
	I0916 23:49:09.897264  522590 addons.go:238] Setting addon inspektor-gadget=true in "addons-069011"
	I0916 23:49:09.897181  522590 addons.go:238] Setting addon metrics-server=true in "addons-069011"
	I0916 23:49:09.897423  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897445  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897164  522590 addons.go:238] Setting addon volumesnapshots=true in "addons-069011"
	I0916 23:49:09.897586  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897092  522590 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-069011"
	I0916 23:49:09.897619  522590 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-069011"
	I0916 23:49:09.897820  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897823  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897828  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897883  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897925  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897931  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.898010  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897153  522590 addons.go:238] Setting addon storage-provisioner=true in "addons-069011"
	I0916 23:49:09.898348  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897270  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897123  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.898989  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.899031  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897162  522590 addons.go:69] Setting registry=true in profile "addons-069011"
	I0916 23:49:09.899114  522590 addons.go:238] Setting addon registry=true in "addons-069011"
	I0916 23:49:09.899147  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897135  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897171  522590 addons.go:69] Setting default-storageclass=true in profile "addons-069011"
	I0916 23:49:09.899508  522590 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-069011"
	I0916 23:49:09.897278  522590 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-069011"
	I0916 23:49:09.899697  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897286  522590 addons.go:69] Setting ingress=true in profile "addons-069011"
	I0916 23:49:09.899882  522590 addons.go:238] Setting addon ingress=true in "addons-069011"
	I0916 23:49:09.899918  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897295  522590 addons.go:69] Setting gcp-auth=true in profile "addons-069011"
	I0916 23:49:09.899976  522590 mustload.go:65] Loading cluster: addons-069011
	I0916 23:49:09.897305  522590 addons.go:69] Setting ingress-dns=true in profile "addons-069011"
	I0916 23:49:09.900142  522590 addons.go:238] Setting addon ingress-dns=true in "addons-069011"
	I0916 23:49:09.900176  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.900346  522590 out.go:179] * Verifying Kubernetes components...
	I0916 23:49:09.902141  522590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:49:09.906029  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906489  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906586  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906921  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.907068  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.909270  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.909876  522590 config.go:182] Loaded profile config "addons-069011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:49:09.910613  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906032  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.966036  522590 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-069011"
	I0916 23:49:09.966110  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.966784  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	W0916 23:49:09.981981  522590 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0916 23:49:09.986930  522590 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0916 23:49:09.989771  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 23:49:09.989801  522590 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 23:49:09.989878  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:09.990151  522590 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0916 23:49:09.991871  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 23:49:09.992484  522590 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0916 23:49:09.993934  522590 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 23:49:09.993954  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 23:49:09.994025  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:09.994418  522590 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0916 23:49:09.994431  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0916 23:49:09.994485  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:09.997452  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 23:49:09.997452  522590 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0916 23:49:10.001152  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 23:49:10.001192  522590 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0916 23:49:10.001229  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0916 23:49:10.001311  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.003359  522590 addons.go:238] Setting addon default-storageclass=true in "addons-069011"
	I0916 23:49:10.003429  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:10.003879  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:10.004609  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 23:49:10.006166  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 23:49:10.007322  522590 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0916 23:49:10.008643  522590 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 23:49:10.008663  522590 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0916 23:49:10.008684  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 23:49:10.008820  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 23:49:10.008829  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.010190  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 23:49:10.010220  522590 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 23:49:10.010294  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.012486  522590 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 23:49:10.012564  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 23:49:10.014826  522590 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:49:10.014910  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 23:49:10.015167  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.016771  522590 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0916 23:49:10.018372  522590 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 23:49:10.018418  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0916 23:49:10.018493  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 23:49:10.018494  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.019739  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 23:49:10.019764  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 23:49:10.019840  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.023104  522590 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0916 23:49:10.023240  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0916 23:49:10.024340  522590 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 23:49:10.024365  522590 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0916 23:49:10.024441  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.025784  522590 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0916 23:49:10.025900  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:49:10.027422  522590 out.go:179]   - Using image docker.io/registry:3.0.0
	I0916 23:49:10.029503  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:10.032360  522590 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 23:49:10.032382  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 23:49:10.032451  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.032643  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:49:10.037094  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.038113  522590 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 23:49:10.038152  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 23:49:10.038221  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.058927  522590 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 23:49:10.058950  522590 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 23:49:10.059009  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.063705  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 23:49:10.066747  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 23:49:10.066781  522590 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 23:49:10.066937  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.067231  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.069660  522590 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 23:49:10.072852  522590 out.go:179]   - Using image docker.io/busybox:stable
	I0916 23:49:10.077706  522590 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 23:49:10.077738  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 23:49:10.077812  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.081171  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.099594  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.099601  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.101679  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.103303  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.109277  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.113014  522590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 23:49:10.114406  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.114692  522590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:49:10.116962  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.132677  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.135654  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.137795  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.144377  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.149192  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.245816  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 23:49:10.245838  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 23:49:10.253803  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0916 23:49:10.256108  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 23:49:10.265944  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 23:49:10.288794  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 23:49:10.288827  522590 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 23:49:10.291276  522590 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:10.291301  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0916 23:49:10.298027  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:49:10.301761  522590 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 23:49:10.301815  522590 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 23:49:10.303881  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 23:49:10.303906  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 23:49:10.307619  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0916 23:49:10.321011  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 23:49:10.321513  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 23:49:10.321533  522590 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 23:49:10.335228  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 23:49:10.342628  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 23:49:10.353105  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 23:49:10.360830  522590 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 23:49:10.360864  522590 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 23:49:10.366097  522590 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 23:49:10.366124  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 23:49:10.368966  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 23:49:10.368997  522590 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 23:49:10.374870  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 23:49:10.374897  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 23:49:10.383228  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:10.419473  522590 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 23:49:10.419505  522590 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 23:49:10.420148  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 23:49:10.420173  522590 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 23:49:10.431466  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 23:49:10.431495  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 23:49:10.431508  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 23:49:10.447520  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 23:49:10.491601  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 23:49:10.491635  522590 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 23:49:10.495666  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 23:49:10.495699  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 23:49:10.522266  522590 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 23:49:10.522304  522590 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 23:49:10.608119  522590 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
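
The "host record injected" message corresponds to the sed pipeline run a few lines earlier: a `hosts` stanza resolving host.minikube.internal to the gateway IP (192.168.49.1) is spliced into the CoreDNS Corefile just before its `forward . /etc/resolv.conf` plugin line, then the ConfigMap is replaced. A rough Go rendition of that splice (pure string manipulation; the sample Corefile below is the stock CoreDNS layout and is assumed, not taken from this cluster):

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a CoreDNS hosts{} stanza before the
	// "forward . /etc/resolv.conf" line, mirroring the sed invocation above.
	func injectHostRecord(corefile, ip string) string {
		hosts := fmt.Sprintf(
			"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
		var out strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				out.WriteString(hosts)
			}
			out.WriteString(line)
		}
		return out.String()
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
	}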
	I0916 23:49:10.610081  522590 node_ready.go:35] waiting up to 6m0s for node "addons-069011" to be "Ready" ...
	I0916 23:49:10.613978  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 23:49:10.614095  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 23:49:10.619888  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 23:49:10.619918  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 23:49:10.636272  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 23:49:10.636303  522590 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 23:49:10.689230  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 23:49:10.705272  522590 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 23:49:10.705297  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 23:49:10.708368  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 23:49:10.708557  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 23:49:10.788275  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 23:49:10.788306  522590 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 23:49:10.806501  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 23:49:10.869607  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 23:49:10.869632  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 23:49:10.937889  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 23:49:10.937914  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 23:49:11.002071  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 23:49:11.002102  522590 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 23:49:11.047895  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 23:49:11.130142  522590 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-069011" context rescaled to 1 replicas
	I0916 23:49:11.643350  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.290178117s)
	I0916 23:49:11.643439  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.30078278s)
	I0916 23:49:11.643452  522590 addons.go:479] Verifying addon ingress=true in "addons-069011"
	I0916 23:49:11.643582  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.212051777s)
	I0916 23:49:11.643613  522590 addons.go:479] Verifying addon registry=true in "addons-069011"
	I0916 23:49:11.643522  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.260251451s)
	I0916 23:49:11.643722  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.196160875s)
	W0916 23:49:11.643735  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:11.643740  522590 addons.go:479] Verifying addon metrics-server=true in "addons-069011"
	I0916 23:49:11.643761  522590 retry.go:31] will retry after 298.602868ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:11.646501  522590 out.go:179] * Verifying registry addon...
	I0916 23:49:11.646501  522590 out.go:179] * Verifying ingress addon...
	I0916 23:49:11.646504  522590 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-069011 service yakd-dashboard -n yakd-dashboard
	
	I0916 23:49:11.652191  522590 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 23:49:11.652206  522590 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 23:49:11.655147  522590 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 23:49:11.655173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:11.655271  522590 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 23:49:11.655299  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
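
The kapi.go "Waiting for pod with label ... current state: Pending" lines come from a label-selector poll against the API server, repeated until the matched pod reaches Running. A hedged client-go sketch of that kind of wait (selector and namespace taken from the registry case in the log; timeout handling trimmed, and this is not minikube's own kapi code):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		sel := "kubernetes.io/minikube-addons=registry"
		for {
			// List pods matching the label selector and check the first one's phase.
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
				fmt.Println("pod is Running")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}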
	I0916 23:49:11.943533  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:12.143203  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.336408881s)
	W0916 23:49:12.143280  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 23:49:12.143297  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.095362374s)
	I0916 23:49:12.143318  522590 retry.go:31] will retry after 271.042655ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
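
The "ensure CRDs are installed first" failure above is an ordering problem: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, before the API server has marked them Established, so the apply fails and minikube falls back to a retry with `--force`. One way to avoid the retry is to wait for the CRD's Established condition before applying any custom resources; a sketch using the apiextensions client (the CRD name shown is taken from the manifests above; this is illustrative, not what minikube does):

	package main

	import (
		"context"
		"fmt"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitEstablished blocks until the named CRD reports Established=True.
	func waitEstablished(cs apiextclient.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(
				context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range crd.Status.Conditions {
					if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("CRD %s not established after %s", name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := apiextclient.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitEstablished(cs, "volumesnapshotclasses.snapshot.storage.k8s.io", time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("safe to apply VolumeSnapshotClass objects now")
	}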
	I0916 23:49:12.143322  522590 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-069011"
	I0916 23:49:12.145833  522590 out.go:179] * Verifying csi-hostpath-driver addon...
	I0916 23:49:12.148236  522590 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 23:49:12.153014  522590 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 23:49:12.153041  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:12.157053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:12.157321  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:12.415287  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0916 23:49:12.575627  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:12.575662  522590 retry.go:31] will retry after 298.950278ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
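
The odd-looking delays in the retry.go lines (298.602868ms, 271.042655ms, 724.402782ms, ...) are jittered backoff values rather than fixed intervals, so concurrent addon applies don't all hammer the API server on the same tick. A minimal sketch of retrying with randomized, growing backoff (the constants and jitter range here are assumptions; minikube's own retry package may differ):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn up to attempts times, sleeping a jittered, growing
	// delay between failures, and returns the last error if all fail.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Jitter in [0.5, 1.5) of the current base, then double the base.
			d := time.Duration(float64(base) * (0.5 + rand.Float64()))
			fmt.Printf("will retry after %s: %v\n", d, err)
			time.Sleep(d)
			base *= 2
		}
		return err
	}

	func main() {
		calls := 0
		_ = retry(5, 300*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return fmt.Errorf("apply failed (attempt %d)", calls)
			}
			return nil
		})
	}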
	W0916 23:49:12.614105  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:12.652906  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:12.655120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:12.655721  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:12.875699  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:13.152262  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:13.155946  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:13.156155  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:13.653200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:13.655268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:13.655558  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:14.152741  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:14.154674  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:14.154869  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:14.651414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:14.654802  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:14.654981  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:14.929904  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.51454475s)
	I0916 23:49:14.929925  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.05417803s)
	W0916 23:49:14.929968  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:14.929993  522590 retry.go:31] will retry after 724.402782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:15.113335  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:15.152058  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:15.155353  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:15.155409  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:15.651139  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:15.655103  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:15.655174  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:15.655439  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:16.152053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:16.155268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:16.155481  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:16.233482  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:16.233517  522590 retry.go:31] will retry after 528.645422ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:16.652337  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:16.654976  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:16.655052  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:16.763126  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:17.152861  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:17.155200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:17.155374  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:17.346237  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:17.346292  522590 retry.go:31] will retry after 1.241721728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:17.613291  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:17.637138  522590 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 23:49:17.637240  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:17.651912  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:17.655594  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:17.655874  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:17.659459  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:17.770859  522590 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 23:49:17.790444  522590 addons.go:238] Setting addon gcp-auth=true in "addons-069011"
	I0916 23:49:17.790517  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:17.790880  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:17.810255  522590 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 23:49:17.810334  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:17.829504  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
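The sshutil.go:53 lines show minikube opening SSH sessions to the node through the Docker-mapped port 33133, authenticating as user docker with the machine's id_rsa, to scp files and run commands. A minimal sketch of the same pattern with golang.org/x/crypto/ssh follows; it mirrors what the log describes and is not minikube's sshutil package.

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path and port are copied from the sshutil line above.
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User: "docker",
    		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		// Acceptable only because the target is a throwaway local test node.
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33133", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	out, err := sess.Output("cat /var/lib/minikube/google_application_credentials.json")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s", out)
    }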
	I0916 23:49:17.924366  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:49:17.925772  522590 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0916 23:49:17.926989  522590 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 23:49:17.927016  522590 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 23:49:17.947928  522590 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 23:49:17.947963  522590 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 23:49:17.968887  522590 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 23:49:17.968910  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 23:49:17.988471  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 23:49:18.151889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:18.155501  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:18.155799  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:18.360333  522590 addons.go:479] Verifying addon gcp-auth=true in "addons-069011"
	I0916 23:49:18.361695  522590 out.go:179] * Verifying gcp-auth addon...
	I0916 23:49:18.364169  522590 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 23:49:18.367024  522590 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 23:49:18.367044  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
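Each kapi.go:96 line above is one tick of a poll loop: list the pods matching a label selector and report their phase until none are Pending. A minimal client-go sketch of that pattern is below; it is the generic equivalent, not minikube's kapi package, and the kubeconfig path and timeout are assumptions for illustration.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 18*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := client.CoreV1().Pods("gcp-auth").List(ctx, metav1.ListOptions{
    				LabelSelector: "kubernetes.io/minikube-addons=gcp-auth",
    			})
    			if err != nil || len(pods.Items) == 0 {
    				return false, nil // transient errors and empty lists: keep polling
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase == corev1.PodPending {
    					return false, nil // matches the "current state: Pending" lines
    				}
    			}
    			return true, nil
    		})
    	fmt.Println("done:", err)
    }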
	I0916 23:49:18.588324  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:18.652355  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:18.654775  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:18.655329  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:18.867741  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:19.151755  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:19.154903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:19.154930  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:19.161345  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:19.161383  522590 retry.go:31] will retry after 2.165570319s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:19.367774  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:19.614026  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:19.652152  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:19.655765  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:19.655827  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:19.867758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:20.151387  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:20.154666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:20.154897  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:20.368600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:20.651411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:20.655000  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:20.655011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:20.868027  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:21.151730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:21.155244  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:21.155464  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:21.327698  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:21.367411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:21.650905  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:21.655659  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:21.655769  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:21.867968  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:21.902069  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:21.902100  522590 retry.go:31] will retry after 1.920767743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:22.113269  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:22.152312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:22.154840  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:22.154952  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:22.368638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:22.651563  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:22.654897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:22.655020  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:22.868412  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:23.151599  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:23.155033  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:23.155245  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:23.367616  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:23.651422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:23.654714  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:23.654854  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:23.823078  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:23.867734  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:24.113772  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:24.152012  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:24.155306  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:24.155536  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:24.367843  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:24.396574  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:24.396608  522590 retry.go:31] will retry after 5.249600328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:24.651892  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:24.655386  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:24.655528  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:24.868048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:25.152228  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:25.154971  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:25.155056  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:25.368598  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:25.651661  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:25.655231  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:25.655269  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:25.867507  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:26.151287  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:26.155745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:26.155923  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:26.368083  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:26.612894  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:26.652086  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:26.655386  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:26.655500  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:26.867894  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:27.151727  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:27.155040  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:27.155077  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:27.368077  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:27.652080  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:27.655544  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:27.655685  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:27.868071  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:28.151972  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:28.155039  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:28.155194  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:28.367271  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:28.613247  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:28.652605  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:28.654553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:28.654734  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:28.868444  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:29.151120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:29.155325  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:29.155404  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:29.367903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:29.646635  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:29.651947  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:29.655369  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:29.655591  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:29.868090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:30.151994  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:30.155445  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:30.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:30.222879  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:30.222909  522590 retry.go:31] will retry after 6.679975361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:30.368039  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:30.651921  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:30.655141  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:30.655354  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:30.867036  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:31.112894  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:31.151818  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:31.155258  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:31.155291  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:31.367578  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:31.651196  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:31.655723  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:31.655764  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:31.867818  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:32.152173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:32.155965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:32.156115  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:32.367078  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:32.652733  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:32.655287  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:32.655347  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:32.867604  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:33.113866  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:33.151850  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:33.155462  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:33.155490  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:33.367548  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:33.651173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:33.655487  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:33.655550  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:33.867796  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:34.151692  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:34.154752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:34.154822  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:34.367980  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:34.652127  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:34.655730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:34.655791  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:34.868271  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:35.151839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:35.155765  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:35.155925  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:35.368376  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:35.613366  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:35.651791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:35.655929  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:35.656002  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:35.868276  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:36.152007  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:36.155246  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:36.155379  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:36.367593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:36.652140  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:36.655627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:36.655826  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:36.867579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:36.903759  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:37.152322  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:37.155245  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:37.155410  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:37.367621  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:37.484516  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:37.484552  522590 retry.go:31] will retry after 4.853736845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:37.613755  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:37.651588  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:37.654987  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:37.655126  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:37.867377  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:38.151407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:38.154847  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:38.155074  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:38.368215  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:38.651724  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:38.655025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:38.655174  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:38.867641  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:39.151291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:39.155533  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:39.155660  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:39.368023  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:39.613957  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:39.652056  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:39.655324  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:39.655427  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:39.867688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:40.151889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:40.155213  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:40.155515  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:40.367629  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:40.652268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:40.655504  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:40.655716  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:40.867786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:41.151908  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:41.155026  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:41.155219  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:41.367009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:41.652274  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:41.654845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:41.654993  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:41.868497  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:42.113784  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:42.152011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:42.156178  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:42.156253  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:42.339312  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:42.368085  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:42.653863  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:42.656534  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:42.656609  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:42.867016  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:42.931965  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:42.932013  522590 retry.go:31] will retry after 9.201032876s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:43.151738  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:43.155452  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:43.157165  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:43.367931  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:43.651921  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:43.655792  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:43.655791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:43.868283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:44.151192  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:44.155952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:44.156077  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:44.368187  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:44.612897  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:44.651871  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:44.655165  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:44.655374  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:44.867416  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:45.152200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:45.155365  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:45.155527  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:45.367088  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:45.652905  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:45.655224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:45.655382  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:45.867470  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:46.152562  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:46.155553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:46.155698  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:46.367899  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:46.613967  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:46.652183  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:46.655613  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:46.655685  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:46.867721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:47.151749  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:47.155062  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:47.155242  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:47.367292  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:47.652156  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:47.655812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:47.656147  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:47.867423  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:48.152152  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:48.155526  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:48.155678  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:48.367871  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:48.651966  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:48.655104  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:48.655456  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:48.867380  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:49.113864  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:49.151422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:49.154601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:49.154659  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:49.368059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:49.651895  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:49.655081  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:49.655227  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:49.867193  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:50.151407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:50.154433  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:50.154532  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:50.367752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:50.614048  522590 node_ready.go:49] node "addons-069011" is "Ready"
	I0916 23:49:50.614124  522590 node_ready.go:38] duration metric: took 40.004018622s for node "addons-069011" to be "Ready" ...
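The node_ready.go entries above are a 40-second poll of the node's Ready condition. A minimal sketch of that kind of check with client-go (illustrative only; not minikube's actual helper):

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // isNodeReady reports whether the named node's Ready condition is True.
    // Assumes an already-configured clientset.
    func isNodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
    	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil // condition not reported yet
    }

The "has \"Ready\":\"False\" status (will retry)" warnings earlier in the log correspond to this returning false until the CNI and kubelet settle.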
	I0916 23:49:50.614142  522590 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:49:50.614260  522590 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:49:50.634002  522590 api_server.go:72] duration metric: took 40.737149121s to wait for apiserver process to appear ...
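The "waiting for apiserver process" step shells out to pgrep: -x requires an exact match, -n keeps only the newest matching PID, and -f matches against the full command line. A rough equivalent (hypothetical helper name):

    import (
    	"os/exec"
    	"strings"
    )

    // apiserverPID returns the PID of the newest process whose full command line
    // matches the kube-apiserver pattern, or an error if none is running.
    func apiserverPID() (string, error) {
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		return "", err // pgrep exits non-zero when nothing matches
    	}
    	return strings.TrimSpace(string(out)), nil
    }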
	I0916 23:49:50.634037  522590 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:49:50.634066  522590 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:49:50.639530  522590 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:49:50.640709  522590 api_server.go:141] control plane version: v1.34.0
	I0916 23:49:50.640743  522590 api_server.go:131] duration metric: took 6.69752ms to wait for apiserver health ...
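The healthz probe above is a plain HTTPS GET that expects the literal body "ok". A self-contained sketch; TLS verification is skipped here only because the target is a local test cluster with a self-signed CA, and a real client should load the cluster CA instead:

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // checkHealthz returns nil iff GET <url> answers 200 with body "ok".
    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		return err
    	}
    	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
    		return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
    	}
    	return nil
    }

For the run above the call would be checkHealthz("https://192.168.49.2:8443/healthz").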
	I0916 23:49:50.640754  522590 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:49:50.645035  522590 system_pods.go:59] 20 kube-system pods found
	I0916 23:49:50.645109  522590 system_pods.go:61] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:50.645119  522590 system_pods.go:61] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:50.645126  522590 system_pods.go:61] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending
	I0916 23:49:50.645131  522590 system_pods.go:61] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending
	I0916 23:49:50.645134  522590 system_pods.go:61] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending
	I0916 23:49:50.645138  522590 system_pods.go:61] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:50.645141  522590 system_pods.go:61] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:50.645146  522590 system_pods.go:61] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:50.645150  522590 system_pods.go:61] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:50.645156  522590 system_pods.go:61] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:50.645165  522590 system_pods.go:61] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:50.645171  522590 system_pods.go:61] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:50.645182  522590 system_pods.go:61] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:50.645192  522590 system_pods.go:61] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:50.645206  522590 system_pods.go:61] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:50.645211  522590 system_pods.go:61] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending
	I0916 23:49:50.645217  522590 system_pods.go:61] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:50.645222  522590 system_pods.go:61] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.645231  522590 system_pods.go:61] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.645238  522590 system_pods.go:61] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:50.645253  522590 system_pods.go:74] duration metric: took 4.491675ms to wait for pod list to return data ...
	I0916 23:49:50.645267  522590 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:49:50.649832  522590 default_sa.go:45] found service account: "default"
	I0916 23:49:50.649863  522590 default_sa.go:55] duration metric: took 4.587184ms for default service account to be created ...
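The default_sa.go step simply asks the API for the "default" ServiceAccount; kube-controller-manager creates it asynchronously after the namespace appears, which is why it has to be polled at all. A sketch:

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // hasDefaultSA reports whether the "default" ServiceAccount exists yet.
    func hasDefaultSA(ctx context.Context, c kubernetes.Interface) bool {
    	_, err := c.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    	return err == nil
    }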
	I0916 23:49:50.649876  522590 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:49:50.651240  522590 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 23:49:50.651263  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
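The kapi.go lines that dominate this log poll pods by label selector and report the first non-Running phase they see, hence the long runs of "current state: Pending". A sketch of that shape of check (hypothetical helper; minikube's kapi.go layers retries and timeouts on top):

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podsRunning reports whether every pod matching selector in ns is Running.
    func podsRunning(ctx context.Context, c kubernetes.Interface, ns, selector string) (bool, error) {
    	pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    	if err != nil {
    		return false, err
    	}
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			return false, nil
    		}
    	}
    	return len(pods.Items) > 0, nil
    }

For the registry wait above, the selector would be "kubernetes.io/minikube-addons=registry" in namespace "kube-system".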
	I0916 23:49:50.653416  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:50.653453  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:50.653463  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:50.653471  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending
	I0916 23:49:50.653478  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending
	I0916 23:49:50.653507  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending
	I0916 23:49:50.653517  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:50.653523  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:50.653531  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:50.653541  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:50.653553  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:50.653564  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:50.653570  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:50.653577  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:50.653586  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:50.653604  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:50.653610  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending
	I0916 23:49:50.653621  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:50.653630  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.653641  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.653649  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:50.653671  522590 retry.go:31] will retry after 286.454663ms: missing components: kube-dns
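The retry.go lines ("will retry after 286.454663ms") come from a poll loop that sleeps a randomized, growing interval between attempts. A generic sketch of that pattern (not minikube's actual retry package):

    import (
    	"context"
    	"math/rand"
    	"time"
    )

    // retryUntil re-runs fn until it returns nil or ctx expires, doubling the
    // base delay each attempt and adding jitter so concurrent waiters don't
    // hammer the apiserver in lockstep.
    func retryUntil(ctx context.Context, base time.Duration, fn func() error) error {
    	delay := base
    	for {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
    		select {
    		case <-ctx.Done():
    			return err
    		case <-time.After(jittered):
    		}
    		delay *= 2
    	}
    }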
	I0916 23:49:50.654669  522590 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 23:49:50.654689  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:50.655263  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:50.867812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:50.970963  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:50.971008  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:50.971021  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:50.971032  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 23:49:50.971040  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 23:49:50.971049  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 23:49:50.971060  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:50.971067  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:50.971075  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:50.971081  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:50.971093  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:50.971098  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:50.971107  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:50.971115  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:50.971127  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:50.971139  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:50.971149  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0916 23:49:50.971487  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:50.971519  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.971529  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.971537  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:50.971560  522590 retry.go:31] will retry after 250.710433ms: missing components: kube-dns
	I0916 23:49:51.152661  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:51.154830  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:51.154922  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:51.227146  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:51.227184  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:51.227191  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:51.227200  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 23:49:51.227206  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 23:49:51.227213  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 23:49:51.227219  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:51.227223  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:51.227226  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:51.227230  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:51.227235  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:51.227241  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:51.227244  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:51.227250  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:51.227256  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:51.227261  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:51.227265  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0916 23:49:51.227272  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:51.227277  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.227286  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.227292  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:51.227310  522590 retry.go:31] will retry after 293.334556ms: missing components: kube-dns
	I0916 23:49:51.368304  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:51.526481  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:51.526535  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:51.526545  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Running
	I0916 23:49:51.526559  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 23:49:51.526572  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 23:49:51.526582  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 23:49:51.526589  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:51.526595  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:51.526601  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:51.526608  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:51.526618  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:51.526623  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:51.526629  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:51.526635  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:51.526645  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:51.526690  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:51.526699  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0916 23:49:51.526714  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:51.526722  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.526731  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.526737  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Running
	I0916 23:49:51.526755  522590 system_pods.go:126] duration metric: took 876.872082ms to wait for k8s-apps to be running ...
	I0916 23:49:51.526767  522590 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:49:51.526834  522590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:49:51.543571  522590 system_svc.go:56] duration metric: took 16.790922ms WaitForService to wait for kubelet
	I0916 23:49:51.543604  522590 kubeadm.go:578] duration metric: took 41.646760707s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
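The kubelet check runs `systemctl is-active --quiet`, which prints nothing and signals the result purely through its exit status (0 = active). A sketch:

    import "os/exec"

    // serviceActive reports whether the given systemd unit is active.
    // Run() returns nil exactly when systemctl exits 0.
    func serviceActive(unit string) bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", unit).Run() == nil
    }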
	I0916 23:49:51.543633  522590 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:49:51.546804  522590 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:49:51.546832  522590 node_conditions.go:123] node cpu capacity is 8
	I0916 23:49:51.546851  522590 node_conditions.go:105] duration metric: took 3.210939ms to run NodePressure ...
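The NodePressure step reads the capacity figures logged above (304681132Ki ephemeral storage, 8 CPUs) straight from the node object's status. A sketch:

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // logCapacity prints the CPU and ephemeral-storage capacity of a node.
    func logCapacity(node *corev1.Node) {
    	cpu := node.Status.Capacity[corev1.ResourceCPU]
    	disk := node.Status.Capacity[corev1.ResourceEphemeralStorage]
    	fmt.Printf("cpu=%s ephemeral-storage=%s\n", cpu.String(), disk.String())
    }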
	I0916 23:49:51.546866  522590 start.go:241] waiting for startup goroutines ...
	I0916 23:49:51.653201  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:51.655460  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:51.655502  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:51.867905  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:52.133215  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:52.152421  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:52.155241  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:52.155318  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:52.367901  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:52.651612  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:52.655810  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:52.655874  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:52.780604  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:52.780644  522590 retry.go:31] will retry after 11.236841486s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
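This validation failure means at least one YAML document inside ig-crd.yaml is missing the two mandatory type fields; kubectl cannot even look up a schema without them, so it refuses the whole file unless validation is disabled. Every top-level document has to start with a header along these lines (illustrative CRD header only; the name is hypothetical, not the contents of the actual file):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: example.gadget.example.io   # hypothetical resource name

Note that the other objects in the manifest applied cleanly ("unchanged"/"configured"), so the breakage is confined to the CRD document.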
	I0916 23:49:52.867960  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:53.152499  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:53.155229  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:53.155690  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:53.369120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:53.653294  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:53.655366  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:53.655499  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:53.867612  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:54.152263  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:54.154786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:54.154825  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:54.368535  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:54.651809  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:54.655532  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:54.655654  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:54.868318  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:55.152216  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:55.154997  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:55.155198  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:55.368885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:55.652607  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:55.654882  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:55.654882  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:55.868072  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:56.153735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:56.155961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:56.156369  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:56.367288  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:56.651552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:56.654554  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:56.654654  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:56.867827  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:57.152232  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:57.154799  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:57.154814  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:57.368344  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:57.651690  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:57.655166  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:57.655327  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:57.867912  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:58.152149  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:58.155593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:58.155720  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:58.367868  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:58.652249  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:58.654626  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:58.654817  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:58.867989  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:59.152281  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:59.154848  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:59.154899  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:59.368414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:59.651849  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:59.655048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:59.655193  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:59.866961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:00.152429  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:00.154913  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:00.154932  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:00.367821  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:00.652008  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:00.655477  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:00.655518  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:00.867460  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:01.152318  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:01.155248  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:01.155323  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:01.367552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:01.651746  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:01.655519  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:01.655601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:01.867766  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:02.152212  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:02.154600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:02.154831  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:02.367336  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:02.651757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:02.655315  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:02.655331  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:02.867665  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:03.152281  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:03.154749  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:03.154818  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:03.368215  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:03.651319  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:03.655739  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:03.655966  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:03.868159  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:04.018435  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:50:04.151970  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:04.155986  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:04.156204  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:04.367594  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:50:04.598781  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:50:04.598815  522590 retry.go:31] will retry after 23.829016694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
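The retry interval roughly doubles between attempts (11.2s, then 23.8s), matching the jittered-backoff pattern sketched earlier. Note also that each attempt captures stdout and stderr separately, which is why the retry message can show the partially-applied resources alongside the validation error. A sketch of running a command that way (hypothetical helper; minikube does this over SSH via ssh_runner.go):

    import (
    	"bytes"
    	"os/exec"
    )

    // runCapture runs a command and returns stdout and stderr separately, so a
    // caller can log both even when the command exits non-zero.
    func runCapture(name string, args ...string) (string, string, error) {
    	var stdout, stderr bytes.Buffer
    	cmd := exec.Command(name, args...)
    	cmd.Stdout = &stdout
    	cmd.Stderr = &stderr
    	err := cmd.Run()
    	return stdout.String(), stderr.String(), err
    }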
	I0916 23:50:04.652029  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:04.655382  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:04.655518  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:04.867585  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:05.151943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:05.155427  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:05.155490  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:05.367838  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:05.652819  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:05.654813  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:05.654893  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:05.868265  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:06.151902  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:06.155241  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:06.155278  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:06.367335  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:06.651933  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:06.655376  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:06.655409  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:06.867544  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:07.151927  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:07.155463  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:07.155566  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:07.367946  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:07.652554  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:07.655150  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:07.655250  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:07.867104  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:08.151576  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:08.154867  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:08.154932  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:08.367820  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:08.652108  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:08.655667  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:08.655674  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:08.867488  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:09.151318  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:09.155660  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:09.155771  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:09.368018  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:09.652352  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:09.654759  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:09.654924  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:09.867979  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:10.152292  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:10.154712  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:10.154744  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:10.367888  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:10.652342  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:10.654855  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:10.655052  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:10.868023  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:11.152284  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:11.154741  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:11.154823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:11.368224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:11.651602  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:11.654730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:11.655430  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:11.867911  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:12.152453  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:12.155032  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:12.155233  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:12.367898  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:12.652236  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:12.654831  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:12.654839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:12.868375  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:13.151282  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:13.155678  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:13.155786  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:13.368346  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:13.652132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:13.655641  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:13.655658  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:13.867735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:14.152048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:14.155624  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:14.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:14.367645  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:14.651952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:14.655351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:14.655433  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:14.867300  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:15.151804  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:15.155275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:15.155321  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:15.367103  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:15.651754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:15.655590  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:15.655740  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:15.868629  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:16.152123  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:16.155556  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:16.155585  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:16.367279  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:16.651583  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:16.655042  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:16.655146  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:16.867499  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:17.151753  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:17.154889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:17.154944  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:17.368258  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:17.651448  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:17.655920  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:17.655988  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:17.868165  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:18.151576  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:18.155019  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:18.155157  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:18.368301  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:18.651579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:18.654851  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:18.655022  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:18.868093  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:19.152647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:19.154885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:19.154951  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:19.368636  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:19.651987  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:19.655509  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:19.655549  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:19.867433  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:20.152200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:20.154985  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:20.155048  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:20.368109  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:20.651638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:20.654894  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:20.654923  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:20.867870  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:21.152292  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:21.155357  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:21.155505  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:21.368035  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:21.652897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:21.656101  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:21.656100  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:21.867817  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:22.152943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:22.155198  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:22.155272  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:22.367576  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:22.652627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:22.655810  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:22.655870  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:22.867990  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:23.152723  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:23.155609  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:23.155624  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:23.367814  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:23.653531  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:23.655283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:23.655824  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:23.867298  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:24.151888  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:24.155832  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:24.155956  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:24.373346  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:24.652179  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:24.655942  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:24.656079  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:24.867787  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:25.152745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:25.156266  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:25.156485  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:25.367952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:25.653577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:25.655613  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:25.655819  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:25.867860  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:26.153299  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:26.155510  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:26.155645  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:26.367671  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:26.652834  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:26.655448  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:26.655652  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:26.867254  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:27.151981  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:27.156009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:27.156850  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:27.367744  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:27.654351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:27.656634  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:27.656737  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:27.868098  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:28.153435  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:28.156745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:28.156944  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:28.367835  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:28.428940  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:50:28.651949  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:28.655492  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:28.655714  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:28.866833  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:50:29.128531  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:50:29.128569  522590 retry.go:31] will retry after 40.39789771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
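
The failed apply above is the one substantive event in this stretch of the log: kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest omits its top-level apiVersion and kind fields, so the inspektor-gadget CRD is never created and minikube schedules a retry (here after ~40s of backoff). The actual contents of ig-crd.yaml are not shown in the log; the snippet below is only a minimal sketch of the failure mode and its fix, using illustrative file names rather than the real addon manifest:

	# A manifest with no apiVersion/kind triggers a validation error of the
	# same shape (exact wording varies by kubectl version):
	cat <<'EOF' > /tmp/broken.yaml
	metadata:
	  name: example
	EOF
	kubectl apply -f /tmp/broken.yaml
	# error: error validating "/tmp/broken.yaml": error validating data:
	#   [apiVersion not set, kind not set] ...

	# Every top-level object must declare both fields, e.g.:
	cat <<'EOF' > /tmp/fixed.yaml
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: example
	EOF
	kubectl apply -f /tmp/fixed.yaml

Note that the --validate=false escape hatch suggested in the stderr would not help here: without kind, kubectl cannot map the object to an API endpoint at all, so the generated manifest itself has to carry the missing headers.
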
	I0916 23:50:29.154066  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:29.156666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:29.156872  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:29.367799  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:29.652238  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:29.654780  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:29.655095  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:29.867922  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:30.152458  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:30.155006  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:30.155093  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:30.367812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:30.652850  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:30.655351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:30.655439  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:30.867340  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:31.151917  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:31.155386  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:31.155417  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:31.367531  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:31.653268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:31.657791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:31.657831  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:31.868270  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:32.155469  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:32.157902  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:32.158614  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:32.368334  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:32.652124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:32.656126  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:32.656171  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:32.867579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:33.152224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:33.155033  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:33.156187  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:33.366965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:33.652338  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:33.655162  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:33.655350  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:33.868673  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:34.152675  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:34.155008  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:34.155063  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:34.368239  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:34.652014  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:34.655025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:34.655185  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:34.867899  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:35.152626  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:35.155359  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:35.155446  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:35.367305  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:35.652378  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:35.655807  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:35.655815  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:35.868004  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:36.152291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:36.155228  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:36.155274  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:36.367904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:36.652666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:36.655054  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:36.655056  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:36.868245  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:37.153660  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:37.155936  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:37.156021  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:37.367947  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:37.652965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:37.654916  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:37.654970  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:37.867352  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:38.152079  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:38.155581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:38.155593  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:38.367781  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:38.652943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:38.655717  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:38.655815  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:38.868640  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:39.152316  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:39.155082  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:39.155138  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:39.368233  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:39.651993  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:39.654885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:39.655026  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:39.868217  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:40.152059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:40.155525  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:40.155590  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:40.367907  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:40.652106  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:40.655499  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:40.655512  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:40.867817  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:41.152251  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:41.154655  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:41.154763  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:41.367545  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:41.652678  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:41.654751  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:41.654768  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:41.868012  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:42.152312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:42.154862  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:42.154889  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:42.368681  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:42.652243  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:42.654497  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:42.654707  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:42.867848  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:43.152560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:43.156124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:43.156157  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:43.367649  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:43.652430  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:43.654968  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:43.654986  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:43.867477  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:44.151715  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:44.154833  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:44.154926  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:44.368003  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:44.652097  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:44.655411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:44.655482  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:44.867734  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:45.151785  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:45.155040  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:45.155294  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:45.367710  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:45.652316  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:45.654798  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:45.654835  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:45.867771  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:46.151940  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:46.155607  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:46.155638  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:46.367470  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:46.652017  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:46.655632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:46.655678  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:46.867796  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:47.152166  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:47.155566  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:47.155778  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:47.367781  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:47.653210  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:47.655490  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:47.655647  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:47.867856  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:48.152084  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:48.155486  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:48.155488  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:48.367425  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:48.651605  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:48.654912  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:48.654974  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:48.868218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:49.151097  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:49.155642  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:49.155716  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:49.367781  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:49.652527  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:49.654528  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:49.654540  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:49.867508  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:50.152341  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:50.155428  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:50.155428  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:50.367631  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:50.651795  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:50.654967  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:50.655191  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:50.867951  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:51.152414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:51.154961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:51.155228  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:51.368136  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:51.654278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:51.658434  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:51.658602  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:51.867554  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:52.151825  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:52.154981  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:52.155043  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:52.368227  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:52.651587  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:52.654841  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:52.654981  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:52.868253  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:53.151568  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:53.154852  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:53.154906  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:53.368332  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:53.652244  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:53.654695  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:53.654772  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:53.867872  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:54.152199  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:54.155137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:54.155272  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:54.367783  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:54.652699  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:54.654783  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:54.654979  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:54.868132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:55.152259  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:55.154647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:55.154768  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:55.367668  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:55.652881  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:55.655002  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:55.655049  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:55.868381  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:56.151518  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:56.154713  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:56.154713  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:56.367620  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:56.651888  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:56.655083  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:56.655175  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:56.868708  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:57.152144  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:57.155438  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:57.155487  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:57.367472  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:57.652234  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:57.654836  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:57.654874  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:57.867903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:58.152561  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:58.154532  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:58.154668  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:58.367739  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:58.652325  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:58.655541  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:58.655728  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:58.867577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:59.152224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:59.155017  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:59.155130  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:59.368654  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:59.652953  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:59.654943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:59.654982  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:59.868114  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:00.151581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:00.154961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:00.155143  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:00.368473  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:00.651816  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:00.655282  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:00.655277  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:00.867147  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:01.151121  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:01.155427  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:01.155456  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:01.367218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:01.651621  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:01.654735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:01.654783  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:01.867758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:02.152018  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:02.155540  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:02.155576  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:02.367896  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:02.652385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:02.655222  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:02.655273  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:02.867265  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:03.151348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:03.156159  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:03.156250  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:03.367497  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:03.652167  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:03.655608  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:03.655715  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:03.867725  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:04.151972  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:04.155471  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:04.155479  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:04.367579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:04.652472  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:04.655145  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:04.655205  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:04.867055  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:05.153048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:05.155508  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:05.155556  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:05.367853  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:05.653083  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:05.655046  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:05.655090  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:05.867138  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:06.152134  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:06.155607  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:06.155674  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:06.367789  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:06.652335  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:06.654809  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:06.654932  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:06.868697  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:07.152531  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:07.154911  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:07.154955  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:07.370805  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:07.652428  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:07.654916  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:07.654974  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:07.868557  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:08.151860  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:08.155090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:08.155145  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:08.367368  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:08.651698  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:08.654845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:08.654852  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:08.868069  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:09.151519  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:09.154937  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:09.154942  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:09.368515  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:09.526750  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:51:09.652541  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:09.655572  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:09.655659  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:09.868054  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:51:10.098163  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:51:10.098324  522590 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
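	(The inspektor-gadget failure above is kubectl's client-side validation: every YAML document applied must declare both apiVersion and kind, and a document in /etc/kubernetes/addons/ig-crd.yaml evidently carried neither, so kubectl rejected it with "apiVersion not set, kind not set" while the other documents in the same apply still went through. A minimal Go sketch of that check, assuming gopkg.in/yaml.v3 is available; the struct, file name, and logic here are illustrative only, not minikube's or kubectl's actual code:

		package main

		import (
			"fmt"
			"os"

			"gopkg.in/yaml.v3" // assumed dependency for this sketch
		)

		// typeMeta mirrors the two fields kubectl requires on every document.
		type typeMeta struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}

		func main() {
			raw, err := os.ReadFile("ig-crd.yaml") // illustrative path
			if err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			// yaml.Unmarshal decodes only the first document in the stream,
			// which is enough to demonstrate the failure mode seen above.
			var tm typeMeta
			if err := yaml.Unmarshal(raw, &tm); err != nil {
				fmt.Fprintln(os.Stderr, "not YAML:", err)
				os.Exit(1)
			}
			// kubectl reports exactly this pair when both fields are empty:
			//   error validating data: [apiVersion not set, kind not set]
			if tm.APIVersion == "" {
				fmt.Println("apiVersion not set")
			}
			if tm.Kind == "" {
				fmt.Println("kind not set")
			}
		}

	Note that, as the stdout above shows, `kubectl apply --force` still created or updated the valid documents (namespace, serviceaccount, daemonset, ...); only the malformed document failed validation, the command exited with status 1, and the addons.go:461 caller logged "apply failed, will retry" and kept going, which is why the pod-wait loop continues uninterrupted below.)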
	I0916 23:51:10.152880  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:10.154839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:10.154875  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:10.367834  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:10.652251  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:10.655021  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:10.655084  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:10.867384  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:11.151842  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:11.155099  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:11.155150  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:11.368186  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:11.652269  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:11.654999  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:11.655256  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:11.867128  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:12.152667  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:12.155099  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:12.155107  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:12.367914  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:12.652518  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:12.654870  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:12.654893  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:12.867312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:13.151982  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:13.155271  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:13.155332  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:13.367823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:13.652387  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:13.654951  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:13.655146  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:13.868844  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:14.153334  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:14.155643  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:14.155904  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:14.368482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:14.652515  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:14.655724  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:14.655757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:14.867812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:15.152601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:15.155443  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:15.155604  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:15.367774  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:15.652539  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:15.655836  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:15.655906  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:15.868440  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:16.151573  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:16.154754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:16.154807  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:16.368168  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:16.652042  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:16.655560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:16.655747  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:16.868218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:17.151965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:17.155140  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:17.155210  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:17.368464  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:17.652037  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:17.655823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:17.655854  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:17.867935  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:18.152022  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:18.155444  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:18.155517  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:18.367482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:18.651927  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:18.654865  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:18.655024  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:18.868282  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:19.151370  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:19.155878  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:19.155924  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:19.368413  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:19.651943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:19.655352  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:19.655352  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:19.868827  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:20.151845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:20.155066  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:20.155072  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:20.369339  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:20.651811  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:20.654774  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:20.654963  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:20.867983  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:21.152276  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:21.154893  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:21.154944  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:21.367794  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:21.652538  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:21.654934  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:21.654939  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:21.867898  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:22.151949  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:22.155295  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:22.155445  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:22.367407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:22.651590  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:22.654904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:22.655019  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:22.867887  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:23.152190  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:23.155502  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:23.155545  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:23.367753  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:23.652562  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:23.654651  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:23.654656  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:23.867848  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:24.152073  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:24.155610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:24.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:24.367957  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:24.652348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:24.654900  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:24.654900  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:24.868057  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:25.152408  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:25.155409  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:25.155602  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:25.368413  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:25.652052  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:25.655209  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:25.655312  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:25.867380  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:26.151535  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:26.155823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:26.155856  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:26.368351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:26.651651  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:26.654990  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:26.654988  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:26.867537  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:27.152091  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:27.155112  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:27.155142  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:27.368638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:27.654137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:27.656355  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:27.656515  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:27.869096  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:28.152385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:28.154581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:28.154673  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:28.367987  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:28.652294  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:28.654753  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:28.654853  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:28.869651  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:29.152647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:29.154807  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:29.154850  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:29.368887  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:29.654241  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:29.655038  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:29.655196  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:29.867665  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:30.151919  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:30.155232  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:30.155296  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:30.367463  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:30.651721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:30.655098  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:30.655163  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:30.867385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:31.151552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:31.154871  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:31.154947  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:31.369090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:31.652787  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:31.654631  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:31.654656  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:31.869965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:32.152268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:32.154797  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:32.154858  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:32.368137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:32.651480  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:32.654729  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:32.654778  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:32.868357  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:33.151932  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:33.155182  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:33.155339  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:33.367560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:33.651975  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:33.655351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:33.655413  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:33.867981  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:34.152479  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:34.155002  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:34.155059  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:34.368688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:34.651549  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:34.655000  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:34.655063  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:34.868189  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:35.151809  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:35.155205  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:35.155350  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:35.367322  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:35.651627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:35.752333  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:35.752426  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:35.868016  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:36.152178  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:36.155466  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:36.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:36.368191  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:36.651475  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:36.654786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:36.654883  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:36.868252  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:37.152153  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:37.155806  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:37.155969  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:37.368131  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:37.652021  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:37.655754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:37.655968  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:37.869697  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:38.152009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:38.155144  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:38.155151  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:38.369995  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:38.652185  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:38.655536  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:38.655553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:38.867639  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:39.151740  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:39.154964  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:39.155029  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:39.368608  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:39.651802  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:39.654757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:39.654961  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:39.869716  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:40.152077  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:40.155323  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:40.155354  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:40.367481  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:40.651750  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:40.655053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:40.655154  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:40.867047  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:41.152227  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:41.154790  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:41.154936  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:41.367727  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:41.652124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:41.655578  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:41.655618  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:41.869685  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:42.152239  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:42.154748  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:42.154775  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:42.367986  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:42.652348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:42.654735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:42.654796  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:42.868157  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:43.151984  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:43.155093  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:43.155268  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:43.367574  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:43.652278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:43.655113  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:43.655163  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:43.867108  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:44.151635  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:44.155169  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:44.155303  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:44.367632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:44.654449  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:44.656348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:44.656416  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:44.867492  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:45.151632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:45.155015  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:45.155082  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:45.368046  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:45.652581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:45.655278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:45.655440  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:45.867304  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:46.151985  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:46.155138  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:46.155139  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:46.367275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:46.652201  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:46.654659  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:46.654708  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:46.867813  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:47.152102  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:47.155410  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:47.155445  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:47.368132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:47.652347  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:47.654903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:47.654929  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:47.868615  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:48.151762  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:48.154894  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:48.155015  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:48.367728  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:48.652716  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:48.655105  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:48.655114  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:48.867844  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:49.151899  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:49.155222  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:49.155285  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:49.367647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:49.651960  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:49.655182  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:49.655212  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:49.867701  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:50.152323  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:50.154730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:50.154952  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:50.368036  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:50.652752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:50.655140  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:50.655212  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:50.867998  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:51.152002  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	... [log condensed: the same kapi.go:96 polling entries repeat roughly every 250ms for the next minute (23:51:51 through 23:52:49); all four waited-on selectors ("kubernetes.io/minikube-addons=registry", "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver") continue to report "Pending: [<nil>]" on every check] ...
	I0916 23:52:49.652531  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:49.655173  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:49.655184  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:49.867097  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:50.152201  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:50.155505  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:50.155636  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:50.367562  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:50.651843  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:50.655301  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:50.655384  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:50.868028  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:51.152914  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:51.155252  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:51.155462  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:51.367149  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:51.651713  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:51.655354  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:51.655450  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:51.867440  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:52.151891  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:52.155305  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:52.155443  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:52.368461  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:52.652610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:52.655667  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:52.655854  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:52.901721  522590 kapi.go:107] duration metric: took 3m34.537544348s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 23:52:52.906543  522590 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-069011 cluster.
	I0916 23:52:52.912324  522590 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 23:52:52.913737  522590 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 23:52:53.153197  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:53.155660  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:53.155666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:53.652828  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:53.655014  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:53.655110  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:54.152324  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:54.155476  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:54.155496  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:54.652106  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:54.655581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:54.655609  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:55.152128  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:55.155885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:55.156039  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:55.652641  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:55.654855  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:55.654978  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:56.152674  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:56.154874  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:56.155000  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:56.652035  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:56.655457  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:56.655496  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:57.152186  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:57.155542  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:57.155561  522590 kapi.go:107] duration metric: took 3m45.503354476s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 23:52:57.652350  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:57.655498  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:58.152881  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:58.154850  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:58.652665  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:58.654696  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:59.152543  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:59.154283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:59.653277  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:59.659941  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:00.152852  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:00.154649  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:00.652327  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:00.654800  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:01.152414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:01.154525  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:01.651817  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:01.655138  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:02.152332  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:02.154656  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:02.653502  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:02.656037  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:03.151857  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:03.155055  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:03.652334  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:03.654876  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:04.152174  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:04.155870  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:04.653124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:04.655053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:05.153568  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:05.155625  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:05.653230  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:05.655236  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:06.152361  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:06.154928  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:06.653059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:06.656200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:07.152336  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:07.155224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:07.652346  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:07.655712  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:08.155752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:08.155824  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:08.653610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:08.655208  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:09.152628  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:09.154934  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:09.652494  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:09.655144  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:10.154348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:10.155986  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:10.652369  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:10.655443  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:11.152148  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:11.155670  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:11.652553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:11.655243  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:12.152796  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:12.155106  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:12.651747  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:12.655634  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:13.153010  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:13.155374  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:13.654738  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:13.656482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:14.152952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:14.155229  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:14.652523  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:14.655028  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:15.152364  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:15.155721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:15.655954  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:15.656795  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:16.152967  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:16.154926  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:16.653027  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:16.655826  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:17.153039  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:17.154839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:17.653034  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:17.655038  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:18.152156  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:18.156123  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:18.651828  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:18.654999  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:19.151648  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:19.154596  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:19.652222  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:19.654551  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:20.155150  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:20.155193  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:20.652029  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:20.655101  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:21.151749  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:21.154961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:21.651672  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:21.655009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:22.152329  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:22.154730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:22.652063  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:22.655272  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:23.152182  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:23.155422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:23.652218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:23.654560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:24.152574  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:24.155253  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:24.652502  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:24.655345  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:25.151663  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:25.155115  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:25.651721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:25.655044  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:26.152383  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:26.155509  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:26.652354  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:26.654747  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:27.169011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:27.169001  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:27.653424  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:27.655714  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:28.152979  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:28.254144  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:28.651804  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:28.655470  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:29.151827  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:29.155108  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:29.652422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:29.655116  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:30.152193  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:30.155976  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:30.652210  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:30.654980  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:31.151709  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:31.155038  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:31.651589  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:31.655050  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:32.151868  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:32.155145  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:32.652363  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:32.655892  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:33.151643  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:33.154810  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:33.653583  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:33.655279  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:34.153153  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:34.155522  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:34.652584  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:34.655570  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:35.151580  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:35.156561  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:35.652732  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:35.655133  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:36.155361  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:36.158601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:36.652275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:36.654674  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:37.153755  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:37.155714  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:37.652926  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:37.654759  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:38.151466  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:38.154733  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:38.653313  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:38.655745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:39.152234  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:39.155638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:39.652445  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:39.654541  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:40.152461  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:40.155143  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:40.652312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:40.654686  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:41.152156  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:41.155170  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:41.651644  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:41.654733  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:42.152309  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:42.154360  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:42.652338  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:42.654550  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:43.151904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:43.154960  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:43.652091  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:43.655542  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:44.151570  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:44.154712  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:44.652708  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:44.654522  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:45.151593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:45.154608  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:45.651922  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:45.655174  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:46.151376  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:46.155482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:46.652627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:46.654516  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:47.151782  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:47.154824  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:47.652429  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:47.654757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:48.152137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:48.154936  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:48.651792  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:48.654929  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:49.152207  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:49.155200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:49.652077  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:49.655059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:50.152055  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:50.155283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:50.651757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:50.654677  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:51.152004  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:51.154803  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:51.653046  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:51.654923  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:52.152123  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:52.154978  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:52.651950  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:52.654986  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:53.151595  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:53.154725  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:53.652661  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:53.654540  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:54.152011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:54.155079  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:54.652239  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:54.654476  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:55.151772  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:55.155226  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:55.652520  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:55.655124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:56.151415  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:56.155604  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:56.652777  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:56.654897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:57.152275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:57.155829  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:57.653025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:57.654754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:58.152978  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:58.154716  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:58.652635  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:58.654449  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:59.152070  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:59.155270  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:59.652577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:59.655424  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:00.152756  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:00.154426  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:00.651964  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:00.655181  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:01.151369  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:01.155561  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:01.651593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:01.654586  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:02.152252  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:02.154655  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:02.652610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:02.654423  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:03.152030  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:03.155167  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:03.651855  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:03.654881  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:04.151556  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:04.154852  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:04.652834  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:04.654500  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:05.152255  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:05.154344  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:05.652483  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:05.655325  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:06.151729  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:06.154664  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:06.652904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:06.654681  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:07.152267  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:07.154724  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:07.652291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:07.654988  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:08.151577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:08.154865  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:08.652678  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:08.654618  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:09.152302  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:09.154688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:09.653092  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:09.654963  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:10.151758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:10.154735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:10.652999  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:10.654845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:11.151513  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:11.154498  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:11.652494  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:11.654909  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:12.151298  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:12.155557  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:12.652643  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:12.654491  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:13.152751  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:13.155246  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:13.652126  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:13.655183  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:14.151763  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:14.155046  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:14.652276  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:14.654785  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:15.152658  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:15.154758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:15.652985  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:15.655060  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:16.151705  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:16.154775  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:16.652773  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:16.654589  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:17.152592  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:17.155097  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:17.651889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:17.655277  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:18.152217  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:18.154701  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:18.652903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:18.654813  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:19.152686  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:19.154506  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:19.652260  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:19.654251  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:20.152385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:20.154777  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:20.652915  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:20.654754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:21.152381  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:21.155278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:21.651555  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:21.654768  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:22.152695  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:22.154647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:22.652919  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:22.654785  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:23.151929  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:23.155096  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:23.652215  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:23.654600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:24.152243  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:24.154806  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:24.653577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:24.655336  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:25.151915  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:25.154836  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:25.651480  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:25.655757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:26.152467  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:26.154712  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:26.653379  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:26.655466  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:27.151800  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:27.155291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:27.653102  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:27.655592  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:28.153140  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:28.155428  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:28.652276  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:28.654838  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:29.153210  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:29.155329  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:29.652338  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:29.654662  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:30.152491  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:30.154729  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:30.653037  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:30.654741  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:31.152830  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:31.154474  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:31.652230  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:31.654509  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:32.151920  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:32.154827  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:32.653191  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:32.655219  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:33.151306  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:33.155960  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:33.651717  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:33.655110  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:34.152304  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:34.154575  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:34.652514  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:34.654778  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:35.152332  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:35.154701  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:35.652961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:35.654516  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:36.151632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:36.154754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:36.654330  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:36.655691  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:37.152418  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:37.154851  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:37.651435  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:37.654582  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:38.153087  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:38.155042  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:38.652337  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:38.654583  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:39.152997  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:39.154432  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:39.652600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:39.654685  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:40.152066  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:40.154971  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:40.651875  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:40.655064  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:41.152238  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:41.154411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:41.651824  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:41.655370  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:42.152256  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:42.154799  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:42.652896  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:42.655256  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:43.152778  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:43.154615  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:43.652772  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:43.654597  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:44.152798  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:44.155091  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:44.652248  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:44.654728  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:45.152282  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:45.154468  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:45.652120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:45.655482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:46.151671  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:46.154724  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:46.653242  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:46.654823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:47.152812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:47.155015  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:47.651579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:47.654786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:48.152839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:48.155119  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:48.652214  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:48.654840  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:49.152996  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:49.155254  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:49.651623  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:49.654685  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:50.153897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:50.155803  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:50.652443  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:50.654867  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:51.152374  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:51.154640  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:51.653033  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:51.654888  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:52.152649  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:52.154604  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:52.652521  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:52.654615  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:53.152209  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:53.154579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:53.652590  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:53.654414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:54.152200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:54.155017  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:54.651951  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:54.655307  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:55.151878  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:55.155133  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:55.651739  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:55.654805  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:56.152326  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:56.154364  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:56.652520  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:56.654812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:57.152821  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:57.154939  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:57.651434  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:57.655826  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:58.152103  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:58.155132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:58.651824  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:58.655072  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:59.154539  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:59.155149  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:59.652232  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:59.654796  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:00.151638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:00.154787  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:00.652885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:00.654626  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:01.152069  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:01.155444  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:01.652069  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:01.655407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:02.152172  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:02.156173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:02.652301  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:02.654808  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:03.153293  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:03.155684  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:03.652844  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:03.654749  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:04.152881  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:04.155246  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:04.652609  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:04.655098  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:05.151757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:05.155258  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:05.652511  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:05.654688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:06.152258  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:06.154829  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:06.653049  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:06.654904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:07.151579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:07.154591  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:07.652331  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:07.654994  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:08.151784  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:08.154921  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:08.652325  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:08.655067  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:09.151900  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:09.155072  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:09.651978  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:09.655300  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:10.151961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:10.154914  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:10.652232  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:10.654644  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:11.152090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:11.155188  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:11.652025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:11.652821  522590 kapi.go:107] duration metric: took 6m0.000625805s to wait for kubernetes.io/minikube-addons=registry ...
	W0916 23:55:11.652991  522590 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0916 23:55:12.148606  522590 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=csi-hostpath-driver" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0916 23:55:12.148655  522590 kapi.go:107] duration metric: took 6m0.000415083s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0916 23:55:12.148771  522590 out.go:285] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0916 23:55:12.151062  522590 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, ingress-dns, amd-gpu-device-plugin, storage-provisioner, default-storageclass, storage-provisioner-rancher, cloud-spanner, metrics-server, yakd, volumesnapshots, gcp-auth, ingress
	I0916 23:55:12.152575  522590 addons.go:514] duration metric: took 6m2.25568849s for enable addons: enabled=[registry-creds nvidia-device-plugin ingress-dns amd-gpu-device-plugin storage-provisioner default-storageclass storage-provisioner-rancher cloud-spanner metrics-server yakd volumesnapshots gcp-auth ingress]
	I0916 23:55:12.152638  522590 start.go:246] waiting for cluster config update ...
	I0916 23:55:12.152661  522590 start.go:255] writing updated cluster config ...
	I0916 23:55:12.152955  522590 ssh_runner.go:195] Run: rm -f paused
	I0916 23:55:12.157549  522590 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:55:12.161141  522590 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m872b" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.165703  522590 pod_ready.go:94] pod "coredns-66bc5c9577-m872b" is "Ready"
	I0916 23:55:12.165731  522590 pod_ready.go:86] duration metric: took 4.567019ms for pod "coredns-66bc5c9577-m872b" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.168067  522590 pod_ready.go:83] waiting for pod "etcd-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.172550  522590 pod_ready.go:94] pod "etcd-addons-069011" is "Ready"
	I0916 23:55:12.172583  522590 pod_ready.go:86] duration metric: took 4.489308ms for pod "etcd-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.174872  522590 pod_ready.go:83] waiting for pod "kube-apiserver-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.179401  522590 pod_ready.go:94] pod "kube-apiserver-addons-069011" is "Ready"
	I0916 23:55:12.179432  522590 pod_ready.go:86] duration metric: took 4.532992ms for pod "kube-apiserver-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.181473  522590 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.561817  522590 pod_ready.go:94] pod "kube-controller-manager-addons-069011" is "Ready"
	I0916 23:55:12.561846  522590 pod_ready.go:86] duration metric: took 380.349392ms for pod "kube-controller-manager-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.763149  522590 pod_ready.go:83] waiting for pod "kube-proxy-v85kq" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.161850  522590 pod_ready.go:94] pod "kube-proxy-v85kq" is "Ready"
	I0916 23:55:13.161880  522590 pod_ready.go:86] duration metric: took 398.696904ms for pod "kube-proxy-v85kq" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.362802  522590 pod_ready.go:83] waiting for pod "kube-scheduler-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.761895  522590 pod_ready.go:94] pod "kube-scheduler-addons-069011" is "Ready"
	I0916 23:55:13.761929  522590 pod_ready.go:86] duration metric: took 399.094008ms for pod "kube-scheduler-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.761944  522590 pod_ready.go:40] duration metric: took 1.604356273s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:55:13.810173  522590 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0916 23:55:13.812279  522590 out.go:179] * Done! kubectl is now configured to use "addons-069011" cluster and "default" namespace by default
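The long run of "waiting for pod" lines above is minikube's kapi.go poll loop: roughly every 500ms it lists the pods matching the addon's label selector until they leave Pending or the 6m0s deadline expires, at which point the wait surfaces as "context deadline exceeded". A minimal client-go sketch of that pattern (the helper name, the 500ms interval, and the Running-phase check are illustrative assumptions, not minikube's exact code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls every 500ms until all pods matching selector are Running
// or timeout elapses; an expired deadline returns "context deadline exceeded".
func waitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat as temporary and keep polling (cf. the kapi.go:81 line above)
			}
			if len(pods.Items) == 0 {
				return false, nil // nothing scheduled yet: the "Pending: [<nil>]" case
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute)
	fmt.Println("wait result:", err) // in this run: context deadline exceeded after 6m0s
}

Against this cluster the condition never becomes true — the registry pod stays Pending because its image never arrives (see the CRI-O log below) — so the poll runs the full six minutes and the addon enable step reports the deadline error.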
	
	
	==> CRI-O <==
	Sep 17 00:02:07 addons-069011 crio[933]: time="2025-09-17 00:02:07.174677596Z" level=info msg="Image docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f not found" id=24ca15c1-6cf1-4e49-a7dd-8f552b2dae3c name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:08 addons-069011 crio[933]: time="2025-09-17 00:02:08.327837612Z" level=info msg="Pulling image: docker.io/nginx:latest" id=79c87698-232c-4da1-81d6-ca4c7ca98f62 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:02:08 addons-069011 crio[933]: time="2025-09-17 00:02:08.330984468Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 17 00:02:17 addons-069011 crio[933]: time="2025-09-17 00:02:17.174420985Z" level=info msg="Checking image status: docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89" id=17896da0-2190-4305-8d40-a32bd03d601a name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:17 addons-069011 crio[933]: time="2025-09-17 00:02:17.174716987Z" level=info msg="Image docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 not found" id=17896da0-2190-4305-8d40-a32bd03d601a name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:19 addons-069011 crio[933]: time="2025-09-17 00:02:19.174864387Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=90582a46-099e-45df-a730-919c2215de59 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:19 addons-069011 crio[933]: time="2025-09-17 00:02:19.175194615Z" level=info msg="Image docker.io/nginx:alpine not found" id=90582a46-099e-45df-a730-919c2215de59 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:21 addons-069011 crio[933]: time="2025-09-17 00:02:21.174767908Z" level=info msg="Checking image status: docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f" id=da50a2e2-5bfd-4eef-a5cc-575d196a18fe name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:21 addons-069011 crio[933]: time="2025-09-17 00:02:21.175122717Z" level=info msg="Image docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f not found" id=da50a2e2-5bfd-4eef-a5cc-575d196a18fe name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:23 addons-069011 crio[933]: time="2025-09-17 00:02:23.174406123Z" level=info msg="Checking image status: docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d" id=68a7b2cb-423f-4118-a4b3-c8d8a74a76a8 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:23 addons-069011 crio[933]: time="2025-09-17 00:02:23.174736978Z" level=info msg="Image docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d not found" id=68a7b2cb-423f-4118-a4b3-c8d8a74a76a8 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:31 addons-069011 crio[933]: time="2025-09-17 00:02:31.174803554Z" level=info msg="Checking image status: docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89" id=a2b8d00a-e6d9-4b2f-a014-52d7160d8eb3 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:31 addons-069011 crio[933]: time="2025-09-17 00:02:31.175078367Z" level=info msg="Image docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 not found" id=a2b8d00a-e6d9-4b2f-a014-52d7160d8eb3 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:34 addons-069011 crio[933]: time="2025-09-17 00:02:34.175554487Z" level=info msg="Checking image status: docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f" id=2eace529-f3a3-4024-8834-c5047ca8c1c7 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:34 addons-069011 crio[933]: time="2025-09-17 00:02:34.175867923Z" level=info msg="Image docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f not found" id=2eace529-f3a3-4024-8834-c5047ca8c1c7 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:36 addons-069011 crio[933]: time="2025-09-17 00:02:36.174412128Z" level=info msg="Checking image status: docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d" id=c3be7c1d-9583-42c5-8235-17f756a693c8 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:36 addons-069011 crio[933]: time="2025-09-17 00:02:36.174761085Z" level=info msg="Image docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d not found" id=c3be7c1d-9583-42c5-8235-17f756a693c8 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:38 addons-069011 crio[933]: time="2025-09-17 00:02:38.420564853Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=371c09d4-b5e2-4516-b98d-56e0ef8ca3ce name=/runtime.v1.ImageService/PullImage
	Sep 17 00:02:38 addons-069011 crio[933]: time="2025-09-17 00:02:38.426762485Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 17 00:02:46 addons-069011 crio[933]: time="2025-09-17 00:02:46.174573325Z" level=info msg="Checking image status: docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89" id=e92776c5-2260-4c62-88d0-bac25ecc4762 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:46 addons-069011 crio[933]: time="2025-09-17 00:02:46.174931830Z" level=info msg="Image docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 not found" id=e92776c5-2260-4c62-88d0-bac25ecc4762 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:47 addons-069011 crio[933]: time="2025-09-17 00:02:47.174617707Z" level=info msg="Checking image status: docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d" id=d6a13927-c266-49dc-be8d-0a633ee3e91a name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:47 addons-069011 crio[933]: time="2025-09-17 00:02:47.174749174Z" level=info msg="Checking image status: docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f" id=faf20ee6-c881-4a52-a381-2762590afbb2 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:47 addons-069011 crio[933]: time="2025-09-17 00:02:47.174937285Z" level=info msg="Image docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f not found" id=faf20ee6-c881-4a52-a381-2762590afbb2 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:47 addons-069011 crio[933]: time="2025-09-17 00:02:47.174957387Z" level=info msg="Image docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d not found" id=d6a13927-c266-49dc-be8d-0a633ee3e91a name=/runtime.v1.ImageService/ImageStatus
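The pattern above — kubelet asking /runtime.v1.ImageService/ImageStatus, CRI-O answering "not found", then a fresh "Trying to access docker.io/..." pull attempt — repeats for every failing addon image, which is consistent with the docker.io pulls never completing. The same status check can be reproduced directly against the CRI socket; a minimal sketch, assuming CRI-O's default socket path:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket path; an assumption here, adjust if configured differently.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.ImageStatus(context.Background(), &runtimeapi.ImageStatusRequest{
		Image: &runtimeapi.ImageSpec{Image: "docker.io/nginx:alpine"},
	})
	if err != nil {
		panic(err)
	}
	if resp.Image == nil {
		fmt.Println("image not found") // the same answer CRI-O logs above
	} else {
		fmt.Println("image present:", resp.Image.Id)
	}
}

crictl performs the same RPC; the point of the sketch is only that "Image ... not found" is a normal answer for an image that has not finished pulling, not an error in itself.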
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	8fc15d8cb7dd5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          4 minutes ago       Running             csi-snapshotter                          0                   e614fc1047195       csi-hostpathplugin-s98vb
	295b9edc02db1       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          5 minutes ago       Running             csi-provisioner                          0                   e614fc1047195       csi-hostpathplugin-s98vb
	3bebfc3ce5f89       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   b34e9dc849123       busybox
	0994d530b2186       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            6 minutes ago       Running             liveness-probe                           0                   e614fc1047195       csi-hostpathplugin-s98vb
	d78ede218b3d9       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           8 minutes ago       Running             hostpath                                 0                   e614fc1047195       csi-hostpathplugin-s98vb
	16a4495ac9a55       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                9 minutes ago       Running             node-driver-registrar                    0                   e614fc1047195       csi-hostpathplugin-s98vb
	ab63cb98da9fa       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             9 minutes ago       Running             controller                               0                   1c8433f3bdf68       ingress-nginx-controller-9cc49f96f-4m84v
	cb0aaa55cf5e9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            10 minutes ago      Running             gadget                                   0                   38b62a86f7523       gadget-g862x
	75b35093f1f14       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              11 minutes ago      Running             registry-proxy                           0                   f2e835ff4c172       registry-proxy-gtpv9
	af48fae595f24       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      11 minutes ago      Running             volume-snapshot-controller               0                   7daa29e729a88       snapshot-controller-7d9fbc56b8-st98r
	fce1ccd8d33b3       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   11 minutes ago      Running             csi-external-health-monitor-controller   0                   e614fc1047195       csi-hostpathplugin-s98vb
	87609248fc31a       gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58                               12 minutes ago      Running             cloud-spanner-emulator                   0                   843001c23149a       cloud-spanner-emulator-85f6b7fc65-wtp6g
	0e4759a430832       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                                             12 minutes ago      Exited              patch                                    2                   0937f6f98ea11       ingress-nginx-admission-patch-sp7zb
	3c653d4c50b5c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      12 minutes ago      Running             volume-snapshot-controller               0                   4be25aad82a4e       snapshot-controller-7d9fbc56b8-s7m82
	11ae5f470bf10       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   12 minutes ago      Exited              create                                   0                   d933a3ae75df0       ingress-nginx-admission-create-wj8lw
	0957eacca23bd       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              12 minutes ago      Running             csi-resizer                              0                   b8131d2ee78de       csi-hostpath-resizer-0
	ad4a09c21105c       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             12 minutes ago      Running             csi-attacher                             0                   15f9a9c33b53e       csi-hostpath-attacher-0
	c1b11b9e2fae1       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             12 minutes ago      Running             local-path-provisioner                   0                   be69758a594c2       local-path-provisioner-648f6765c9-4qs6g
	7d0db99be084d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             12 minutes ago      Running             storage-provisioner                      0                   e26878809420e       storage-provisioner
	b62ac7b1e2d93       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             12 minutes ago      Running             coredns                                  0                   90cd65a058e3e       coredns-66bc5c9577-m872b
	81f4db589dfd0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             13 minutes ago      Running             kindnet-cni                              0                   282dceccf27e4       kindnet-hn7tx
	8204c89cdc90d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                                             13 minutes ago      Running             kube-proxy                               0                   076ce47b67764       kube-proxy-v85kq
	d1d2d3ef1a2d6       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                                             13 minutes ago      Running             kube-controller-manager                  0                   2befa508c819b       kube-controller-manager-addons-069011
	f4991aa96dbe9       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                                             13 minutes ago      Running             kube-apiserver                           0                   24f1de8dafedd       kube-apiserver-addons-069011
	ecbc264153ff2       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                                             13 minutes ago      Running             kube-scheduler                           0                   3af000cb5a57c       kube-scheduler-addons-069011
	5a81076e6d9a8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             13 minutes ago      Running             etcd                                     0                   f590790ed13d4       etcd-addons-069011
	
	
	==> coredns [b62ac7b1e2d935063ca8c0594642886e49ad0423507f04d148e7bd385ca935ce] <==
	[INFO] 10.244.0.16:42316 - 8538 "A IN registry.kube-system.svc.cluster.local.local. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00553318s
	[INFO] 10.244.0.16:42316 - 52070 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.000068908s
	[INFO] 10.244.0.16:42316 - 8651 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.000079183s
	[INFO] 10.244.0.16:42316 - 1760 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000085438s
	[INFO] 10.244.0.16:42316 - 43217 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000140841s
	[INFO] 10.244.0.16:42316 - 22058 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000069054s
	[INFO] 10.244.0.16:42316 - 11896 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000075172s
	[INFO] 10.244.0.16:42316 - 54539 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000127178s
	[INFO] 10.244.0.16:42316 - 38459 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000128581s
	[INFO] 10.244.0.16:44990 - 5588 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000214845s
	[INFO] 10.244.0.16:44990 - 20588 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000118777s
	[INFO] 10.244.0.16:44990 - 29478 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000149894s
	[INFO] 10.244.0.16:44990 - 20194 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000195892s
	[INFO] 10.244.0.16:44990 - 22826 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000116581s
	[INFO] 10.244.0.16:44990 - 13057 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000214831s
	[INFO] 10.244.0.16:44990 - 64351 "A IN registry.kube-system.svc.cluster.local.local. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005730969s
	[INFO] 10.244.0.16:44990 - 24542 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.008325856s
	[INFO] 10.244.0.16:44990 - 10377 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.000106774s
	[INFO] 10.244.0.16:44990 - 36733 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.000147242s
	[INFO] 10.244.0.16:44990 - 3573 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000076942s
	[INFO] 10.244.0.16:44990 - 37650 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000129672s
	[INFO] 10.244.0.16:44990 - 33988 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000079301s
	[INFO] 10.244.0.16:44990 - 47854 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000116296s
	[INFO] 10.244.0.16:44990 - 36670 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000206157s
	[INFO] 10.244.0.16:44990 - 56241 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000225366s
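The burst of NXDOMAIN answers above is ordinary search-path expansion, not a failure: with the default pod resolv.conf (ndots:5), "registry.kube-system.svc.cluster.local" has only four dots, so the resolver tries the name with each search suffix appended (the cluster domains, then the GCE-provided ones) before the bare name finally returns NOERROR with the Service address. A lookup from any in-cluster pod walks the same list; a minimal Go sketch:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Run inside a pod: Go's resolver reads the pod's /etc/resolv.conf, so the
	// search suffixes and ndots:5 produce the same NXDOMAIN walk logged above
	// before the bare name answers NOERROR.
	addrs, err := net.LookupHost("registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs) // the registry Service's ClusterIP
}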
	
	
	==> describe nodes <==
	Name:               addons-069011
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-069011
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=addons-069011
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_49_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-069011
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-069011"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:49:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-069011
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:02:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Sep 2025 23:58:45 +0000   Tue, 16 Sep 2025 23:49:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Sep 2025 23:58:45 +0000   Tue, 16 Sep 2025 23:49:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Sep 2025 23:58:45 +0000   Tue, 16 Sep 2025 23:49:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Sep 2025 23:58:45 +0000   Tue, 16 Sep 2025 23:49:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-069011
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e6a06e1e17043f19f3b8f5ea0927359
	  System UUID:                fa23b867-4022-409a-8baa-bf981ffedafe
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (24 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  default                     cloud-spanner-emulator-85f6b7fc65-wtp6g     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  gadget                      gadget-g862x                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-4m84v    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         13m
	  kube-system                 amd-gpu-device-plugin-flfw9                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-m872b                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpathplugin-s98vb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-addons-069011                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-hn7tx                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-addons-069011                250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-069011       200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-v85kq                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-069011                100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 registry-66898fdd98-bl4r5                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 registry-proxy-gtpv9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-7d9fbc56b8-s7m82        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-7d9fbc56b8-st98r        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  local-path-storage          local-path-provisioner-648f6765c9-4qs6g     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node addons-069011 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node addons-069011 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node addons-069011 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node addons-069011 event: Registered Node addons-069011 in Controller
	  Normal  NodeReady                12m   kubelet          Node addons-069011 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	
	
	==> etcd [5a81076e6d9a8c9983866e09b1190810cd0059c34edeae1a479f9d18f3003a91] <==
	{"level":"warn","ts":"2025-09-16T23:49:00.991705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:00.999124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.014667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.021210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.027886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.034514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.041663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.048524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.054851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.061680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.068240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.075225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.081757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.105206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.111554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.154896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:12.666348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:12.673196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.575058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.581784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.598000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.605378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33386","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-16T23:59:00.630787Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1449}
	{"level":"info","ts":"2025-09-16T23:59:00.656834Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1449,"took":"25.282457ms","hash":3232880921,"current-db-size-bytes":5799936,"current-db-size":"5.8 MB","current-db-size-in-use-bytes":3645440,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2025-09-16T23:59:00.656898Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3232880921,"revision":1449,"compact-revision":-1}
	
	
	==> kernel <==
	 00:02:49 up  2:45,  0 users,  load average: 0.96, 4.13, 29.63
	Linux addons-069011 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [81f4db589dfd0f8f014a7fc056f2d7f752ecc52737aea10ae2f8a98d0242428b] <==
	I0917 00:00:40.184895       1 main.go:301] handling current node
	I0917 00:00:50.185931       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:00:50.185986       1 main.go:301] handling current node
	I0917 00:01:00.185582       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:01:00.185621       1 main.go:301] handling current node
	I0917 00:01:10.184174       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:01:10.184202       1 main.go:301] handling current node
	I0917 00:01:20.189514       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:01:20.189560       1 main.go:301] handling current node
	I0917 00:01:30.185700       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:01:30.186123       1 main.go:301] handling current node
	I0917 00:01:40.186524       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:01:40.186565       1 main.go:301] handling current node
	I0917 00:01:50.186736       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:01:50.186791       1 main.go:301] handling current node
	I0917 00:02:00.185635       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:02:00.185677       1 main.go:301] handling current node
	I0917 00:02:10.184347       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:02:10.184424       1 main.go:301] handling current node
	I0917 00:02:20.185542       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:02:20.185579       1 main.go:301] handling current node
	I0917 00:02:30.184600       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:02:30.184649       1 main.go:301] handling current node
	I0917 00:02:40.184804       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:02:40.184855       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f4991aa96dbe98af7f934784cdc7973d5aabec72325938f0e98ad8efde3d06e3] <==
	I0916 23:51:40.656101       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:52:37.080075       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:53:06.365528       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:53:51.505661       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:54:19.846477       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:55:21.099421       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:55:29.068080       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:56:24.856015       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0916 23:56:38.562764       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43110: use of closed network connection
	E0916 23:56:38.758708       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43158: use of closed network connection
	I0916 23:56:47.547088       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0916 23:56:47.750812       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.94.177"}
	I0916 23:56:48.077381       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.184.141"}
	I0916 23:56:56.387694       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0916 23:56:58.875443       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:57:28.517320       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:58:21.717919       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:58:53.740979       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:59:01.561467       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0916 23:59:46.839359       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:00:03.548840       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:01:10.960424       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:01:15.531695       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:28.446522       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:31.841808       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [d1d2d3ef1a2d61d604d7b7b71875c31a98127791ebbcaaae9e7c5dcebb1fd036] <==
	I0916 23:49:08.558692       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0916 23:49:08.559424       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0916 23:49:08.560582       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0916 23:49:08.560682       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0916 23:49:08.562044       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0916 23:49:08.562105       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:49:08.562171       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:49:08.562209       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:49:08.562217       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:49:08.562221       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:49:08.563325       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:49:08.564561       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:49:08.570797       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-069011" podCIDRs=["10.244.0.0/24"]
	I0916 23:49:08.576824       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0916 23:49:38.568454       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 23:49:38.568633       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0916 23:49:38.568684       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0916 23:49:38.586865       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0916 23:49:38.591210       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0916 23:49:38.668805       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:49:38.692110       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:49:53.514314       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0916 23:56:52.202912       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0916 23:58:53.764380       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0917 00:01:02.592919       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	
	
	==> kube-proxy [8204c89cdc90d58370aa745a3053c12e5b976409a1e0bedddf9508ac3e770c1f] <==
	I0916 23:49:09.803647       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:49:09.874911       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:49:09.984976       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:49:09.985628       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:49:09.986296       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:49:10.154642       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:49:10.159433       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:49:10.183201       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:49:10.195463       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:49:10.195513       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:49:10.199563       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:49:10.199664       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:49:10.200188       1 config.go:309] "Starting node config controller"
	I0916 23:49:10.200265       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:49:10.200334       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:49:10.200369       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:49:10.200991       1 config.go:200] "Starting service config controller"
	I0916 23:49:10.201078       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:49:10.299859       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:49:10.300474       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0916 23:49:10.300501       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0916 23:49:10.302086       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ecbc264153ff2a219390febac6665f8efc1a49ab24db502b79ba6888e6bd5b71] <==
	E0916 23:49:01.591306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0916 23:49:01.591979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0916 23:49:01.591995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:49:01.592038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0916 23:49:01.592032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0916 23:49:01.592058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0916 23:49:01.592081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0916 23:49:01.592128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0916 23:49:01.592273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0916 23:49:01.592272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0916 23:49:01.592315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0916 23:49:02.478666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0916 23:49:02.478742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0916 23:49:02.495998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0916 23:49:02.533597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0916 23:49:02.645572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0916 23:49:02.658831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0916 23:49:02.700650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:49:02.730028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0916 23:49:02.731014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0916 23:49:02.807698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0916 23:49:02.811032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0916 23:49:02.813063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0916 23:49:02.832467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I0916 23:49:05.387364       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 17 00:02:17 addons-069011 kubelet[1557]: E0917 00:02:17.175118    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: reading manifest sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="3ebf3aba-8898-42b1-a92e-3bc50dd56aab"
	Sep 17 00:02:21 addons-069011 kubelet[1557]: I0917 00:02:21.174096    1557 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-flfw9" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 00:02:21 addons-069011 kubelet[1557]: E0917 00:02:21.175543    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"amd-gpu-device-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f\\\": ErrImagePull: reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/amd-gpu-device-plugin-flfw9" podUID="b2f08e52-5a20-4c80-bc6c-a073ebe5797b"
	Sep 17 00:02:23 addons-069011 kubelet[1557]: E0917 00:02:23.175085    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-bl4r5" podUID="34782a61-58ac-458e-ab2f-7a22bac44c65"
	Sep 17 00:02:24 addons-069011 kubelet[1557]: E0917 00:02:24.347606    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067344347291152  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:02:24 addons-069011 kubelet[1557]: E0917 00:02:24.347639    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067344347291152  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:02:27 addons-069011 kubelet[1557]: I0917 00:02:27.174103    1557 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-gtpv9" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 00:02:27 addons-069011 kubelet[1557]: I0917 00:02:27.174342    1557 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 00:02:31 addons-069011 kubelet[1557]: E0917 00:02:31.175461    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: reading manifest sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="3ebf3aba-8898-42b1-a92e-3bc50dd56aab"
	Sep 17 00:02:34 addons-069011 kubelet[1557]: I0917 00:02:34.175036    1557 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-flfw9" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 00:02:34 addons-069011 kubelet[1557]: E0917 00:02:34.176137    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"amd-gpu-device-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f\\\": ErrImagePull: reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/amd-gpu-device-plugin-flfw9" podUID="b2f08e52-5a20-4c80-bc6c-a073ebe5797b"
	Sep 17 00:02:34 addons-069011 kubelet[1557]: E0917 00:02:34.350437    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067354350122132  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:02:34 addons-069011 kubelet[1557]: E0917 00:02:34.350475    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067354350122132  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:02:36 addons-069011 kubelet[1557]: E0917 00:02:36.175098    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-bl4r5" podUID="34782a61-58ac-458e-ab2f-7a22bac44c65"
	Sep 17 00:02:38 addons-069011 kubelet[1557]: E0917 00:02:38.419997    1557 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 17 00:02:38 addons-069011 kubelet[1557]: E0917 00:02:38.420068    1557 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 17 00:02:38 addons-069011 kubelet[1557]: E0917 00:02:38.420288    1557 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(0b15e693-4577-4039-b409-5badaa871bfc): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 00:02:38 addons-069011 kubelet[1557]: E0917 00:02:38.420346    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="0b15e693-4577-4039-b409-5badaa871bfc"
	Sep 17 00:02:38 addons-069011 kubelet[1557]: E0917 00:02:38.538636    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="0b15e693-4577-4039-b409-5badaa871bfc"
	Sep 17 00:02:44 addons-069011 kubelet[1557]: E0917 00:02:44.352870    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067364352566794  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:02:44 addons-069011 kubelet[1557]: E0917 00:02:44.352916    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067364352566794  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:02:46 addons-069011 kubelet[1557]: E0917 00:02:46.175316    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: reading manifest sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="3ebf3aba-8898-42b1-a92e-3bc50dd56aab"
	Sep 17 00:02:47 addons-069011 kubelet[1557]: I0917 00:02:47.174228    1557 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-flfw9" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 00:02:47 addons-069011 kubelet[1557]: E0917 00:02:47.175251    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"amd-gpu-device-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f\\\": ErrImagePull: reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/amd-gpu-device-plugin-flfw9" podUID="b2f08e52-5a20-4c80-bc6c-a073ebe5797b"
	Sep 17 00:02:47 addons-069011 kubelet[1557]: E0917 00:02:47.175265    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-bl4r5" podUID="34782a61-58ac-458e-ab2f-7a22bac44c65"
	
	
	==> storage-provisioner [7d0db99be084d7a7996f085af51ba0b4b9263d1a30c5ba98cac79995b3641b35] <==
	W0917 00:02:24.437664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:26.441282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:26.445125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:28.447926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:28.451949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:30.454961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:30.459784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:32.463557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:32.469237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:34.472655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:34.476919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:36.480741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:36.486541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:38.490154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:38.495746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:40.499229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:40.503656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:42.506923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:42.511258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:44.514610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:44.519161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:46.523063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:46.527609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:48.531031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:48.535325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-069011 -n addons-069011
helpers_test.go:269: (dbg) Run:  kubectl --context addons-069011 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod ingress-nginx-admission-create-wj8lw ingress-nginx-admission-patch-sp7zb amd-gpu-device-plugin-flfw9 kube-ingress-dns-minikube registry-66898fdd98-bl4r5
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-069011 describe pod nginx task-pv-pod ingress-nginx-admission-create-wj8lw ingress-nginx-admission-patch-sp7zb amd-gpu-device-plugin-flfw9 kube-ingress-dns-minikube registry-66898fdd98-bl4r5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-069011 describe pod nginx task-pv-pod ingress-nginx-admission-create-wj8lw ingress-nginx-admission-patch-sp7zb amd-gpu-device-plugin-flfw9 kube-ingress-dns-minikube registry-66898fdd98-bl4r5: exit status 1 (87.319312ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-069011/192.168.49.2
	Start Time:       Tue, 16 Sep 2025 23:56:47 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.24
	IPs:
	  IP:  10.244.0.24
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kksmh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kksmh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/nginx to addons-069011
	  Warning  Failed     72s (x3 over 4m17s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     72s (x3 over 4m17s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    46s (x4 over 4m17s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     46s (x4 over 4m17s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    31s (x4 over 6m2s)   kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-069011/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:01:13 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rfz5d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-rfz5d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age               From               Message
	  ----     ------     ----              ----               -------
	  Normal   Scheduled  97s               default-scheduler  Successfully assigned default/task-pv-pod to addons-069011
	  Warning  Failed     12s               kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     12s               kubelet            Error: ErrImagePull
	  Normal   BackOff    12s               kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     12s               kubelet            Error: ImagePullBackOff
	  Normal   Pulling    0s (x2 over 97s)  kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-wj8lw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-sp7zb" not found
	Error from server (NotFound): pods "amd-gpu-device-plugin-flfw9" not found
	Error from server (NotFound): pods "kube-ingress-dns-minikube" not found
	Error from server (NotFound): pods "registry-66898fdd98-bl4r5" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-069011 describe pod nginx task-pv-pod ingress-nginx-admission-create-wj8lw ingress-nginx-admission-patch-sp7zb amd-gpu-device-plugin-flfw9 kube-ingress-dns-minikube registry-66898fdd98-bl4r5: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 addons disable registry --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/Registry (363.41s)

TestAddons/parallel/Ingress (492.43s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-069011 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-069011 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-069011 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [44795e64-34b3-4492-b6af-9e6353fa4bb4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-069011 -n addons-069011
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-09-17 00:04:48.085359594 +0000 UTC m=+995.552703171
addons_test.go:252: (dbg) Run:  kubectl --context addons-069011 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-069011 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-069011/192.168.49.2
Start Time:       Tue, 16 Sep 2025 23:56:47 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.24
IPs:
  IP:  10.244.0.24
Containers:
  nginx:
    Container ID:
    Image:          docker.io/nginx:alpine
    Image ID:
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kksmh (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-kksmh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  8m1s                  default-scheduler  Successfully assigned default/nginx to addons-069011
  Warning  Failed     100s (x4 over 6m15s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     100s (x4 over 6m15s)  kubelet            Error: ErrImagePull
  Normal   BackOff    23s (x10 over 6m15s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     23s (x10 over 6m15s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    9s (x5 over 8m)       kubelet            Pulling image "docker.io/nginx:alpine"
addons_test.go:252: (dbg) Run:  kubectl --context addons-069011 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-069011 logs nginx -n default: exit status 1 (73.928627ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-069011 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
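
The kubelet events above show the actual root cause: anonymous pulls of docker.io/nginx:alpine are rejected with toomanyrequests, so the pod never leaves ImagePullBackOff. A minimal sketch of how one could confirm and sidestep the limit on a runner like this (profile name taken from the log; `minikube image load` and `--registry-mirror` are standard minikube CLI options, suggested here only as mitigations, not something this job runs):

	# Reproduce the failure outside Kubernetes; a rate-limited host IP fails the same way.
	docker pull docker.io/nginx:alpine
	# Side-load the image into the node's CRI-O store so kubelet never pulls from docker.io.
	minikube -p addons-069011 image load docker.io/nginx:alpine
	# Alternatively, start the cluster against a mirror: minikube start --registry-mirror=<mirror-url>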
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-069011
helpers_test.go:243: (dbg) docker inspect addons-069011:

-- stdout --
	[
	    {
	        "Id": "678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1",
	        "Created": "2025-09-16T23:48:50.029636255Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 523240,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-16T23:48:50.075029861Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/hosts",
	        "LogPath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1-json.log",
	        "Name": "/addons-069011",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-069011:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-069011",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1",
	                "LowerDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-069011",
	                "Source": "/var/lib/docker/volumes/addons-069011/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-069011",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-069011",
	                "name.minikube.sigs.k8s.io": "addons-069011",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f7ea0b62281ff8981f73b140342aff58601fbb663df7278dfdd6743a41abcca5",
	            "SandboxKey": "/var/run/docker/netns/f7ea0b62281f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-069011": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:4c:3e:1e:87:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d62ec0fa3bfb3ffd62859a508f03996c549db14f34473599ddd1b9022067b7b9",
	                    "EndpointID": "f8f4fe858390c8f96bc24eec26736fad3a3b1ba30f09e93e016a6a79e947f7af",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-069011",
	                        "678205c9d470"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
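The inspect dump above is what the post-mortem helpers parse for container state and host-mapped ports. A small sketch of reading those fields directly, using the same Go template that appears later in this log (container name from the dump; expected values shown as comments):

	# Container state ("running" per the dump above).
	docker inspect -f '{{.State.Status}}' addons-069011
	# Host port mapped to the node's SSH port 22/tcp (33133 in this run).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-069011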
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-069011 -n addons-069011
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-069011 logs -n 25: (1.43003862s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-515641 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-515641   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ delete  │ -p download-only-515641                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-515641   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ delete  │ -p download-only-997829                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-997829   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ delete  │ -p download-only-515641                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-515641   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ start   │ --download-only -p download-docker-660125 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-660125 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ delete  │ -p download-docker-660125                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-660125 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ start   │ --download-only -p binary-mirror-785971 --alsologtostderr --binary-mirror http://127.0.0.1:38515 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-785971   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ delete  │ -p binary-mirror-785971                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-785971   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ addons  │ enable dashboard -p addons-069011                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ addons  │ disable dashboard -p addons-069011                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ start   │ -p addons-069011 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:55 UTC │
	│ addons  │ addons-069011 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:55 UTC │ 16 Sep 25 23:55 UTC │
	│ addons  │ addons-069011 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ enable headlamp -p addons-069011 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ addons-069011 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-069011                                                                                                                                                                                                                                                                                                                                                                                           │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ addons-069011 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ addons-069011 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:57 UTC │
	│ addons  │ addons-069011 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:58 UTC │ 16 Sep 25 23:58 UTC │
	│ addons  │ addons-069011 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-069011          │ jenkins │ v1.37.0 │ 17 Sep 25 00:00 UTC │ 17 Sep 25 00:00 UTC │
	│ addons  │ addons-069011 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-069011          │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ addons  │ addons-069011 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-069011          │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ addons  │ addons-069011 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-069011          │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ addons  │ addons-069011 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-069011          │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:48:27
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:48:27.723751  522590 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:48:27.723864  522590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:48:27.723869  522590 out.go:374] Setting ErrFile to fd 2...
	I0916 23:48:27.723873  522590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:48:27.724066  522590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0916 23:48:27.724618  522590 out.go:368] Setting JSON to false
	I0916 23:48:27.725494  522590 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9051,"bootTime":1758057457,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:48:27.725585  522590 start.go:140] virtualization: kvm guest
	I0916 23:48:27.728073  522590 out.go:179] * [addons-069011] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:48:27.729850  522590 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:48:27.729868  522590 notify.go:220] Checking for updates...
	I0916 23:48:27.733822  522590 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:48:27.736141  522590 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0916 23:48:27.738039  522590 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0916 23:48:27.740423  522590 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:48:27.743368  522590 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:48:27.746574  522590 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:48:27.771724  522590 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:48:27.771874  522590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:48:27.829971  522590 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:48:27.818365984 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:48:27.830249  522590 docker.go:318] overlay module found
	I0916 23:48:27.832946  522590 out.go:179] * Using the docker driver based on user configuration
	I0916 23:48:27.834751  522590 start.go:304] selected driver: docker
	I0916 23:48:27.834826  522590 start.go:918] validating driver "docker" against <nil>
	I0916 23:48:27.834849  522590 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:48:27.835571  522590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:48:27.897913  522590 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:48:27.886229333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:48:27.898100  522590 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:48:27.898315  522590 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:48:27.900183  522590 out.go:179] * Using Docker driver with root privileges
	I0916 23:48:27.901481  522590 cni.go:84] Creating CNI manager for ""
	I0916 23:48:27.901597  522590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 23:48:27.901613  522590 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:48:27.901710  522590 start.go:348] cluster config:
	{Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0916 23:48:27.903324  522590 out.go:179] * Starting "addons-069011" primary control-plane node in "addons-069011" cluster
	I0916 23:48:27.904623  522590 cache.go:123] Beginning downloading kic base image for docker with crio
	I0916 23:48:27.905841  522590 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:48:27.907270  522590 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:48:27.907330  522590 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:48:27.907328  522590 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0916 23:48:27.907354  522590 cache.go:58] Caching tarball of preloaded images
	I0916 23:48:27.907495  522590 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 23:48:27.907513  522590 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0916 23:48:27.907895  522590 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/config.json ...
	I0916 23:48:27.907924  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/config.json: {Name:mk15dc7feab5fd17bb004b2e5f6ac3bc55ac0d4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:27.925199  522590 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0916 23:48:27.925352  522590 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0916 23:48:27.925371  522590 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0916 23:48:27.925375  522590 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0916 23:48:27.925383  522590 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0916 23:48:27.925403  522590 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0916 23:48:40.932191  522590 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0916 23:48:40.932224  522590 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:48:40.932259  522590 start.go:360] acquireMachinesLock for addons-069011: {Name:mk9387b718f452cc25627a84d4c20b7f46084ff2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:48:40.932371  522590 start.go:364] duration metric: took 90.542µs to acquireMachinesLock for "addons-069011"
	I0916 23:48:40.932411  522590 start.go:93] Provisioning new machine with config: &{Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
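	# Sketch (not part of the log): the ClusterConfig printed above is persisted as JSON at the
	# config.json path logged at 23:48:27. Assuming the Go struct fields above marshal under the
	# same names and that jq is available on the runner, the key settings can be read back with:
	jq '{Name, Driver, Memory, Runtime: .KubernetesConfig.ContainerRuntime, K8s: .KubernetesConfig.KubernetesVersion}' \
	  /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/config.json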
	I0916 23:48:40.932527  522590 start.go:125] createHost starting for "" (driver="docker")
	I0916 23:48:40.934531  522590 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0916 23:48:40.934774  522590 start.go:159] libmachine.API.Create for "addons-069011" (driver="docker")
	I0916 23:48:40.934810  522590 client.go:168] LocalClient.Create starting
	I0916 23:48:40.934920  522590 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0916 23:48:41.819608  522590 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0916 23:48:42.094971  522590 cli_runner.go:164] Run: docker network inspect addons-069011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 23:48:42.113173  522590 cli_runner.go:211] docker network inspect addons-069011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 23:48:42.113240  522590 network_create.go:284] running [docker network inspect addons-069011] to gather additional debugging logs...
	I0916 23:48:42.113258  522590 cli_runner.go:164] Run: docker network inspect addons-069011
	W0916 23:48:42.130815  522590 cli_runner.go:211] docker network inspect addons-069011 returned with exit code 1
	I0916 23:48:42.130846  522590 network_create.go:287] error running [docker network inspect addons-069011]: docker network inspect addons-069011: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-069011 not found
	I0916 23:48:42.130884  522590 network_create.go:289] output of [docker network inspect addons-069011]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-069011 not found
	
	** /stderr **
	I0916 23:48:42.130990  522590 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:48:42.149832  522590 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002180220}
	I0916 23:48:42.149931  522590 network_create.go:124] attempt to create docker network addons-069011 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 23:48:42.150036  522590 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-069011 addons-069011
	I0916 23:48:42.212157  522590 network_create.go:108] docker network addons-069011 192.168.49.0/24 created
	I0916 23:48:42.212194  522590 kic.go:121] calculated static IP "192.168.49.2" for the "addons-069011" container
	I0916 23:48:42.212312  522590 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:48:42.229867  522590 cli_runner.go:164] Run: docker volume create addons-069011 --label name.minikube.sigs.k8s.io=addons-069011 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:48:42.252846  522590 oci.go:103] Successfully created a docker volume addons-069011
	I0916 23:48:42.252968  522590 cli_runner.go:164] Run: docker run --rm --name addons-069011-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069011 --entrypoint /usr/bin/test -v addons-069011:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:48:45.649491  522590 cli_runner.go:217] Completed: docker run --rm --name addons-069011-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069011 --entrypoint /usr/bin/test -v addons-069011:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (3.39647838s)
	I0916 23:48:45.649523  522590 oci.go:107] Successfully prepared a docker volume addons-069011
	I0916 23:48:45.649558  522590 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:48:45.649589  522590 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:48:45.649695  522590 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-069011:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:48:49.956300  522590 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-069011:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.306552681s)
	I0916 23:48:49.956343  522590 kic.go:203] duration metric: took 4.306749088s to extract preloaded images to volume ...
	W0916 23:48:49.956477  522590 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:48:49.956523  522590 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:48:49.956572  522590 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:48:50.013382  522590 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-069011 --name addons-069011 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069011 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-069011 --network addons-069011 --ip 192.168.49.2 --volume addons-069011:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
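	# Sketch (not part of the log): the single container-create command above, reflowed for
	# readability. Every flag below is verbatim from that line; only the two --security-opt
	# flags have been grouped together.
	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --tmpfs /tmp --tmpfs /run \
	  -v /lib/modules:/lib/modules:ro \
	  --hostname addons-069011 --name addons-069011 \
	  --label created_by.minikube.sigs.k8s.io=true \
	  --label name.minikube.sigs.k8s.io=addons-069011 \
	  --label role.minikube.sigs.k8s.io= \
	  --label mode.minikube.sigs.k8s.io=addons-069011 \
	  --network addons-069011 --ip 192.168.49.2 \
	  --volume addons-069011:/var \
	  --memory=4096mb -e container=docker \
	  --expose 8443 \
	  --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 \
	  --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
	  gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1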
	I0916 23:48:50.304600  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Running}}
	I0916 23:48:50.323420  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:48:50.342386  522590 cli_runner.go:164] Run: docker exec addons-069011 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:48:50.402276  522590 oci.go:144] the created container "addons-069011" has a running status.
	I0916 23:48:50.402326  522590 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa...
	I0916 23:48:50.521235  522590 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:48:50.553384  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:48:50.579068  522590 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:48:50.579099  522590 kic_runner.go:114] Args: [docker exec --privileged addons-069011 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:48:50.638566  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:48:50.659803  522590 machine.go:93] provisionDockerMachine start ...
	I0916 23:48:50.660411  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:50.680019  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:50.680310  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:50.680332  522590 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:48:50.820950  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-069011
	
	I0916 23:48:50.820990  522590 ubuntu.go:182] provisioning hostname "addons-069011"
	I0916 23:48:50.821063  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:50.841195  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:50.841673  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:50.841710  522590 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-069011 && echo "addons-069011" | sudo tee /etc/hostname
	I0916 23:48:50.996855  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-069011
	
	I0916 23:48:50.996967  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.016407  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:51.016637  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:51.016655  522590 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-069011' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-069011/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-069011' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:48:51.154270  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:48:51.154311  522590 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0916 23:48:51.154380  522590 ubuntu.go:190] setting up certificates
	I0916 23:48:51.154420  522590 provision.go:84] configureAuth start
	I0916 23:48:51.154487  522590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069011
	I0916 23:48:51.173820  522590 provision.go:143] copyHostCerts
	I0916 23:48:51.173904  522590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0916 23:48:51.174069  522590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0916 23:48:51.174140  522590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0916 23:48:51.174195  522590 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.addons-069011 san=[127.0.0.1 192.168.49.2 addons-069011 localhost minikube]
	I0916 23:48:51.417777  522590 provision.go:177] copyRemoteCerts
	I0916 23:48:51.417839  522590 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:48:51.417897  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.435902  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:51.535686  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 23:48:51.563321  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:48:51.590971  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 23:48:51.617420  522590 provision.go:87] duration metric: took 462.978002ms to configureAuth
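configureAuth generated a docker-machine style server certificate with SANs [127.0.0.1 192.168.49.2 addons-069011 localhost minikube] and copied it into the container at /etc/docker. A sketch for inspecting those SANs (assumes openssl is available on the host):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'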
	I0916 23:48:51.617461  522590 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:48:51.617668  522590 config.go:182] Loaded profile config "addons-069011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:48:51.617795  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.638144  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:51.638409  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:51.638436  522590 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 23:48:51.891077  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 23:48:51.891114  522590 machine.go:96] duration metric: took 1.230812219s to provisionDockerMachine
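The tee above wrote a one-line drop-in at /etc/sysconfig/crio.minikube that marks the service CIDR 10.96.0.0/12 as an insecure registry, then restarted CRI-O. A sketch for confirming it from the host (docker exec, as used earlier in this log):

	docker exec addons-069011 cat /etc/sysconfig/crio.minikube
	# Expected content, per the command above:
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '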
	I0916 23:48:51.891125  522590 client.go:171] duration metric: took 10.956309615s to LocalClient.Create
	I0916 23:48:51.891146  522590 start.go:167] duration metric: took 10.956377105s to libmachine.API.Create "addons-069011"
	I0916 23:48:51.891155  522590 start.go:293] postStartSetup for "addons-069011" (driver="docker")
	I0916 23:48:51.891170  522590 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:48:51.891245  522590 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:48:51.891288  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.909900  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.010593  522590 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:48:52.014317  522590 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:48:52.014357  522590 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:48:52.014366  522590 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:48:52.014375  522590 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:48:52.014406  522590 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0916 23:48:52.014479  522590 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0916 23:48:52.014515  522590 start.go:296] duration metric: took 123.348567ms for postStartSetup
	I0916 23:48:52.014852  522590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069011
	I0916 23:48:52.034024  522590 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/config.json ...
	I0916 23:48:52.034357  522590 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:48:52.034430  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:52.053383  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.147697  522590 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:48:52.152300  522590 start.go:128] duration metric: took 11.219755748s to createHost
	I0916 23:48:52.152322  522590 start.go:83] releasing machines lock for "addons-069011", held for 11.219940729s
	I0916 23:48:52.152383  522590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069011
	I0916 23:48:52.170897  522590 ssh_runner.go:195] Run: cat /version.json
	I0916 23:48:52.170959  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:52.170960  522590 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:48:52.171033  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:52.190054  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.190316  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.282770  522590 ssh_runner.go:195] Run: systemctl --version
	I0916 23:48:52.358127  522590 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 23:48:52.500662  522590 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:48:52.505640  522590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:48:52.530299  522590 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:48:52.530413  522590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:48:52.562277  522590 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
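The conflicting default CNI configs are renamed with a .mk_disabled suffix rather than deleted, so only the CNI minikube installs later (kindnet, recommended further down) stays active. A sketch to see what was left behind (file names taken from the log above):

	docker exec addons-069011 ls /etc/cni/net.d
	# 87-podman-bridge.conflist and 100-crio-bridge.conf should now carry
	# the .mk_disabled suffix.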
	I0916 23:48:52.562302  522590 start.go:495] detecting cgroup driver to use...
	I0916 23:48:52.562333  522590 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:48:52.562405  522590 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:48:52.578904  522590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:48:52.592493  522590 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:48:52.592567  522590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:48:52.607812  522590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:48:52.623718  522590 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:48:52.695401  522590 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:48:52.772869  522590 docker.go:234] disabling docker service ...
	I0916 23:48:52.772931  522590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:48:52.793499  522590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:48:52.806446  522590 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:48:52.880604  522590 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:48:52.994666  522590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:48:53.008181  522590 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:48:53.026581  522590 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0916 23:48:53.026648  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.040463  522590 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0916 23:48:53.040546  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.052415  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.063700  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.074445  522590 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:48:53.085081  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.097098  522590 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.114871  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.125827  522590 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:48:53.135170  522590 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:48:53.145546  522590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:48:53.253634  522590 ssh_runner.go:195] Run: sudo systemctl restart crio
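The sed edits above amount to a small set of CRI-O overrides. A sketch of their net effect on /etc/crio/crio.conf.d/02-crio.conf, reconstructed from the commands themselves rather than from a file dump:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]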
	I0916 23:48:53.356442  522590 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 23:48:53.356540  522590 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 23:48:53.360459  522590 start.go:563] Will wait 60s for crictl version
	I0916 23:48:53.360526  522590 ssh_runner.go:195] Run: which crictl
	I0916 23:48:53.364103  522590 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:48:53.402094  522590 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 23:48:53.402233  522590 ssh_runner.go:195] Run: crio --version
	I0916 23:48:53.441123  522590 ssh_runner.go:195] Run: crio --version
	I0916 23:48:53.481919  522590 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0916 23:48:53.483462  522590 cli_runner.go:164] Run: docker network inspect addons-069011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:48:53.502054  522590 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:48:53.506129  522590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:48:53.518646  522590 kubeadm.go:875] updating cluster {Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 23:48:53.518762  522590 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:48:53.518816  522590 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:48:53.590933  522590 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 23:48:53.590961  522590 crio.go:433] Images already preloaded, skipping extraction
	I0916 23:48:53.591020  522590 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:48:53.627023  522590 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 23:48:53.627057  522590 cache_images.go:85] Images are preloaded, skipping loading
	I0916 23:48:53.627066  522590 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0916 23:48:53.627155  522590 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-069011 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
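The kubelet unit above uses the systemd idiom of an empty ExecStart= line to clear any inherited command before the real one is set; the drop-in itself lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the 363-byte scp below). A sketch for viewing the merged unit inside the node container:

	docker exec addons-069011 systemctl cat kubelet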
	I0916 23:48:53.627228  522590 ssh_runner.go:195] Run: crio config
	I0916 23:48:53.674869  522590 cni.go:84] Creating CNI manager for ""
	I0916 23:48:53.674893  522590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 23:48:53.674906  522590 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 23:48:53.674926  522590 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-069011 NodeName:addons-069011 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 23:48:53.675093  522590 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-069011"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
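The rendered manifest above is copied to /var/tmp/minikube/kubeadm.yaml.new (the 2209-byte scp below) before kubeadm consumes it. As a sketch, recent kubeadm versions can sanity-check such a file offline:

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new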
	
	I0916 23:48:53.675157  522590 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:48:53.685496  522590 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:48:53.685568  522590 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 23:48:53.695890  522590 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0916 23:48:53.715420  522590 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:48:53.738183  522590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0916 23:48:53.758975  522590 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 23:48:53.763002  522590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:48:53.775153  522590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:48:53.837066  522590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:48:53.861100  522590 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011 for IP: 192.168.49.2
	I0916 23:48:53.861120  522590 certs.go:194] generating shared ca certs ...
	I0916 23:48:53.861145  522590 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:53.861308  522590 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0916 23:48:54.155814  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt ...
	I0916 23:48:54.155846  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt: {Name:mk009b1713fd08c38e8c6ac054b69276424ded29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.156071  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key ...
	I0916 23:48:54.156093  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key: {Name:mk39b68875de7851b17692da85e287f48166d2fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.156213  522590 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0916 23:48:54.291541  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt ...
	I0916 23:48:54.291579  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt: {Name:mk94baf5fb1a8134bb0c9a9f3d32b751fe0bf777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.291793  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key ...
	I0916 23:48:54.291817  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key: {Name:mk06b3e70f919971eec12f66023f6279f2a9059e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.291928  522590 certs.go:256] generating profile certs ...
	I0916 23:48:54.292014  522590 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.key
	I0916 23:48:54.292060  522590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt with IP's: []
	I0916 23:48:54.529110  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt ...
	I0916 23:48:54.529147  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: {Name:mk9156e00306316f93255eae42ecd81bb5d60b0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.529374  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.key ...
	I0916 23:48:54.529406  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.key: {Name:mk15bd78effcf8815d5571a84284c31db31b997e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.529525  522590 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd
	I0916 23:48:54.529556  522590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0916 23:48:54.601370  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd ...
	I0916 23:48:54.601415  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd: {Name:mkb42f86b810cddd05c27083cd910769800b1942 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.602548  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd ...
	I0916 23:48:54.602578  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd: {Name:mkf41ec91a0589b4d908c830ee946e4604a6886c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.603343  522590 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt
	I0916 23:48:54.603493  522590 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key
	I0916 23:48:54.603577  522590 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key
	I0916 23:48:54.603602  522590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt with IP's: []
	I0916 23:48:54.685718  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt ...
	I0916 23:48:54.685751  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt: {Name:mk4c4f7fbd326f3d00c11caa86441b715a5844e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.686777  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key ...
	I0916 23:48:54.686809  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key: {Name:mkde64e1b9ef5bdc16ad6f2b11b391d65f689b86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.687062  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:48:54.687107  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0916 23:48:54.687130  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:48:54.687161  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0916 23:48:54.687932  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:48:54.717259  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:48:54.744669  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:48:54.771438  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 23:48:54.799454  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 23:48:54.826220  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:48:54.853243  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:48:54.878912  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 23:48:54.905711  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:48:54.935757  522590 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 23:48:54.956698  522590 ssh_runner.go:195] Run: openssl version
	I0916 23:48:54.962817  522590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:48:54.976805  522590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:48:54.980979  522590 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:48:54.981051  522590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:48:54.988637  522590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
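The b5213941.0 link name follows OpenSSL's subject-hash convention for CA lookup in /etc/ssl/certs; it is exactly the value printed by the -hash invocation above. Sketch:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# -> b5213941, hence the symlink /etc/ssl/certs/b5213941.0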
	I0916 23:48:55.000379  522590 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:48:55.004385  522590 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:48:55.004456  522590 kubeadm.go:392] StartCluster: {Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:48:55.004547  522590 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 23:48:55.004599  522590 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 23:48:55.043443  522590 cri.go:89] found id: ""
	I0916 23:48:55.043525  522590 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 23:48:55.053975  522590 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 23:48:55.064119  522590 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 23:48:55.064186  522590 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 23:48:55.074381  522590 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 23:48:55.074421  522590 kubeadm.go:157] found existing configuration files:
	
	I0916 23:48:55.074469  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 23:48:55.084667  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 23:48:55.084749  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 23:48:55.095859  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 23:48:55.106006  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 23:48:55.106068  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 23:48:55.115485  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 23:48:55.124880  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 23:48:55.124952  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 23:48:55.134292  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 23:48:55.144662  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 23:48:55.144725  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 23:48:55.154111  522590 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 23:48:55.211692  522590 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0916 23:48:55.271378  522590 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
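Both preflight warnings are benign in this run: the kernel-config module is simply absent on the GCP kernel, and minikube starts the kubelet itself (the systemctl start a few lines up). On a host where the kubelet must survive reboots, the fix the second warning suggests would be:

	systemctl enable kubelet.service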
	I0916 23:49:04.949743  522590 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0916 23:49:04.949820  522590 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 23:49:04.949928  522590 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 23:49:04.950016  522590 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0916 23:49:04.950100  522590 kubeadm.go:310] OS: Linux
	I0916 23:49:04.950168  522590 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 23:49:04.950250  522590 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 23:49:04.950311  522590 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 23:49:04.950355  522590 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 23:49:04.950436  522590 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 23:49:04.950511  522590 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 23:49:04.950590  522590 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 23:49:04.950659  522590 kubeadm.go:310] CGROUPS_IO: enabled
	I0916 23:49:04.950779  522590 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 23:49:04.950896  522590 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 23:49:04.950988  522590 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 23:49:04.951039  522590 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 23:49:04.953148  522590 out.go:252]   - Generating certificates and keys ...
	I0916 23:49:04.953253  522590 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 23:49:04.953350  522590 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 23:49:04.953473  522590 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 23:49:04.953544  522590 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 23:49:04.953598  522590 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 23:49:04.953656  522590 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 23:49:04.953723  522590 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 23:49:04.953871  522590 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-069011 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:49:04.953944  522590 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 23:49:04.954104  522590 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-069011 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:49:04.954204  522590 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 23:49:04.954308  522590 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 23:49:04.954373  522590 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 23:49:04.954472  522590 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 23:49:04.954529  522590 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 23:49:04.954641  522590 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 23:49:04.954719  522590 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 23:49:04.954827  522590 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 23:49:04.954889  522590 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 23:49:04.954961  522590 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 23:49:04.955029  522590 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 23:49:04.956667  522590 out.go:252]   - Booting up control plane ...
	I0916 23:49:04.956807  522590 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 23:49:04.956925  522590 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 23:49:04.956985  522590 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 23:49:04.957219  522590 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 23:49:04.957368  522590 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0916 23:49:04.957516  522590 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0916 23:49:04.957633  522590 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 23:49:04.957703  522590 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 23:49:04.957908  522590 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 23:49:04.958044  522590 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 23:49:04.958151  522590 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.203651ms
	I0916 23:49:04.958278  522590 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0916 23:49:04.958374  522590 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0916 23:49:04.958531  522590 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0916 23:49:04.958637  522590 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0916 23:49:04.958758  522590 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.870805967s
	I0916 23:49:04.958876  522590 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.059203573s
	I0916 23:49:04.958980  522590 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.002212231s
	I0916 23:49:04.959143  522590 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 23:49:04.959322  522590 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 23:49:04.959464  522590 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 23:49:04.959729  522590 kubeadm.go:310] [mark-control-plane] Marking the node addons-069011 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 23:49:04.959828  522590 kubeadm.go:310] [bootstrap-token] Using token: hth27u.vwd374r3m591cy8w
	I0916 23:49:04.961508  522590 out.go:252]   - Configuring RBAC rules ...
	I0916 23:49:04.961663  522590 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 23:49:04.961761  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 23:49:04.961918  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 23:49:04.962103  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 23:49:04.962249  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 23:49:04.962324  522590 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 23:49:04.962449  522590 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 23:49:04.962510  522590 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 23:49:04.962584  522590 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 23:49:04.962595  522590 kubeadm.go:310] 
	I0916 23:49:04.962677  522590 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 23:49:04.962687  522590 kubeadm.go:310] 
	I0916 23:49:04.962800  522590 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 23:49:04.962816  522590 kubeadm.go:310] 
	I0916 23:49:04.962858  522590 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 23:49:04.962957  522590 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 23:49:04.963031  522590 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 23:49:04.963041  522590 kubeadm.go:310] 
	I0916 23:49:04.963139  522590 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 23:49:04.963150  522590 kubeadm.go:310] 
	I0916 23:49:04.963217  522590 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 23:49:04.963226  522590 kubeadm.go:310] 
	I0916 23:49:04.963305  522590 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 23:49:04.963432  522590 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 23:49:04.963527  522590 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 23:49:04.963541  522590 kubeadm.go:310] 
	I0916 23:49:04.963668  522590 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 23:49:04.963778  522590 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 23:49:04.963792  522590 kubeadm.go:310] 
	I0916 23:49:04.963908  522590 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hth27u.vwd374r3m591cy8w \
	I0916 23:49:04.964060  522590 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 \
	I0916 23:49:04.964108  522590 kubeadm.go:310] 	--control-plane 
	I0916 23:49:04.964118  522590 kubeadm.go:310] 
	I0916 23:49:04.964224  522590 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 23:49:04.964234  522590 kubeadm.go:310] 
	I0916 23:49:04.964354  522590 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hth27u.vwd374r3m591cy8w \
	I0916 23:49:04.964531  522590 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 
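The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key. A sketch of the standard kubeadm recipe for recomputing it, using the certificatesDir from the config above (/var/lib/minikube/certs):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex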
	I0916 23:49:04.964546  522590 cni.go:84] Creating CNI manager for ""
	I0916 23:49:04.964565  522590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 23:49:04.966440  522590 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0916 23:49:04.968135  522590 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 23:49:04.972876  522590 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0916 23:49:04.972901  522590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 23:49:04.992864  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 23:49:05.238639  522590 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 23:49:05.238825  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:05.238851  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-069011 minikube.k8s.io/updated_at=2025_09_16T23_49_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=addons-069011 minikube.k8s.io/primary=true
	I0916 23:49:05.248222  522590 ops.go:34] apiserver oom_adj: -16
	I0916 23:49:05.324340  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:05.825316  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:06.324537  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:06.824724  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:07.325050  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:07.824729  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:08.325083  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:08.824525  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:09.324551  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:09.825331  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:09.895926  522590 kubeadm.go:1105] duration metric: took 4.65716259s to wait for elevateKubeSystemPrivileges
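The ten back-to-back "get sa default" runs above are a poll: minikube waits for the default service account to exist before granting kube-system privileges. A roughly equivalent sketch of that loop:

	until sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done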
	I0916 23:49:09.895964  522590 kubeadm.go:394] duration metric: took 14.891511977s to StartCluster
	I0916 23:49:09.895989  522590 settings.go:142] acquiring lock: {Name:mk3b4e5824fb8718eece00dc70a9d05f0af2a028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:49:09.896108  522590 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0916 23:49:09.896612  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:49:09.896807  522590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 23:49:09.896820  522590 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 23:49:09.896883  522590 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 23:49:09.897046  522590 config.go:182] Loaded profile config "addons-069011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:49:09.897061  522590 addons.go:69] Setting volcano=true in profile "addons-069011"
	I0916 23:49:09.897068  522590 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-069011"
	I0916 23:49:09.897082  522590 addons.go:238] Setting addon volcano=true in "addons-069011"
	I0916 23:49:09.897052  522590 addons.go:69] Setting yakd=true in profile "addons-069011"
	I0916 23:49:09.897090  522590 addons.go:69] Setting registry-creds=true in profile "addons-069011"
	I0916 23:49:09.897102  522590 addons.go:238] Setting addon yakd=true in "addons-069011"
	I0916 23:49:09.897112  522590 addons.go:238] Setting addon registry-creds=true in "addons-069011"
	I0916 23:49:09.897122  522590 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-069011"
	I0916 23:49:09.897128  522590 addons.go:69] Setting storage-provisioner=true in profile "addons-069011"
	I0916 23:49:09.897146  522590 addons.go:69] Setting volumesnapshots=true in profile "addons-069011"
	I0916 23:49:09.897161  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897169  522590 addons.go:69] Setting metrics-server=true in profile "addons-069011"
	I0916 23:49:09.897176  522590 addons.go:69] Setting cloud-spanner=true in profile "addons-069011"
	I0916 23:49:09.897178  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897047  522590 addons.go:69] Setting inspektor-gadget=true in profile "addons-069011"
	I0916 23:49:09.897165  522590 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-069011"
	I0916 23:49:09.897206  522590 addons.go:238] Setting addon cloud-spanner=true in "addons-069011"
	I0916 23:49:09.897216  522590 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-069011"
	I0916 23:49:09.897232  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897233  522590 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-069011"
	I0916 23:49:09.897264  522590 addons.go:238] Setting addon inspektor-gadget=true in "addons-069011"
	I0916 23:49:09.897181  522590 addons.go:238] Setting addon metrics-server=true in "addons-069011"
	I0916 23:49:09.897423  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897445  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897164  522590 addons.go:238] Setting addon volumesnapshots=true in "addons-069011"
	I0916 23:49:09.897586  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897092  522590 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-069011"
	I0916 23:49:09.897619  522590 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-069011"
	I0916 23:49:09.897820  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897823  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897828  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897883  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897925  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897931  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.898010  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897153  522590 addons.go:238] Setting addon storage-provisioner=true in "addons-069011"
	I0916 23:49:09.898348  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897270  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897123  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.898989  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.899031  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897162  522590 addons.go:69] Setting registry=true in profile "addons-069011"
	I0916 23:49:09.899114  522590 addons.go:238] Setting addon registry=true in "addons-069011"
	I0916 23:49:09.899147  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897135  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897171  522590 addons.go:69] Setting default-storageclass=true in profile "addons-069011"
	I0916 23:49:09.899508  522590 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-069011"
	I0916 23:49:09.897278  522590 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-069011"
	I0916 23:49:09.899697  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897286  522590 addons.go:69] Setting ingress=true in profile "addons-069011"
	I0916 23:49:09.899882  522590 addons.go:238] Setting addon ingress=true in "addons-069011"
	I0916 23:49:09.899918  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897295  522590 addons.go:69] Setting gcp-auth=true in profile "addons-069011"
	I0916 23:49:09.899976  522590 mustload.go:65] Loading cluster: addons-069011
	I0916 23:49:09.897305  522590 addons.go:69] Setting ingress-dns=true in profile "addons-069011"
	I0916 23:49:09.900142  522590 addons.go:238] Setting addon ingress-dns=true in "addons-069011"
	I0916 23:49:09.900176  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.900346  522590 out.go:179] * Verifying Kubernetes components...
	I0916 23:49:09.902141  522590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:49:09.906029  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906489  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906586  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906921  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.907068  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.909270  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.909876  522590 config.go:182] Loaded profile config "addons-069011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:49:09.910613  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906032  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.966036  522590 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-069011"
	I0916 23:49:09.966110  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.966784  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	W0916 23:49:09.981981  522590 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0916 23:49:09.986930  522590 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0916 23:49:09.989771  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 23:49:09.989801  522590 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 23:49:09.989878  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:09.990151  522590 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0916 23:49:09.991871  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 23:49:09.992484  522590 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0916 23:49:09.993934  522590 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 23:49:09.993954  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 23:49:09.994025  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:09.994418  522590 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0916 23:49:09.994431  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0916 23:49:09.994485  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:09.997452  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 23:49:09.997452  522590 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0916 23:49:10.001152  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 23:49:10.001192  522590 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0916 23:49:10.001229  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0916 23:49:10.001311  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.003359  522590 addons.go:238] Setting addon default-storageclass=true in "addons-069011"
	I0916 23:49:10.003429  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:10.003879  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:10.004609  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 23:49:10.006166  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 23:49:10.007322  522590 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0916 23:49:10.008643  522590 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 23:49:10.008663  522590 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0916 23:49:10.008684  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 23:49:10.008820  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 23:49:10.008829  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.010190  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 23:49:10.010220  522590 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 23:49:10.010294  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.012486  522590 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 23:49:10.012564  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 23:49:10.014826  522590 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:49:10.014910  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 23:49:10.015167  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.016771  522590 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0916 23:49:10.018372  522590 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 23:49:10.018418  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0916 23:49:10.018493  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 23:49:10.018494  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.019739  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 23:49:10.019764  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 23:49:10.019840  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.023104  522590 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0916 23:49:10.023240  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0916 23:49:10.024340  522590 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 23:49:10.024365  522590 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0916 23:49:10.024441  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.025784  522590 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0916 23:49:10.025900  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:49:10.027422  522590 out.go:179]   - Using image docker.io/registry:3.0.0
	I0916 23:49:10.029503  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:10.032360  522590 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 23:49:10.032382  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 23:49:10.032451  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.032643  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:49:10.037094  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.038113  522590 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 23:49:10.038152  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 23:49:10.038221  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.058927  522590 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 23:49:10.058950  522590 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 23:49:10.059009  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.063705  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 23:49:10.066747  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 23:49:10.066781  522590 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 23:49:10.066937  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.067231  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.069660  522590 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 23:49:10.072852  522590 out.go:179]   - Using image docker.io/busybox:stable
	I0916 23:49:10.077706  522590 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 23:49:10.077738  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 23:49:10.077812  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.081171  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.099594  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.099601  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.101679  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.103303  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.109277  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.113014  522590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 23:49:10.114406  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.114692  522590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:49:10.116962  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.132677  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.135654  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.137795  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.144377  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.149192  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.245816  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 23:49:10.245838  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 23:49:10.253803  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0916 23:49:10.256108  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 23:49:10.265944  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 23:49:10.288794  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 23:49:10.288827  522590 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 23:49:10.291276  522590 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:10.291301  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0916 23:49:10.298027  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:49:10.301761  522590 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 23:49:10.301815  522590 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 23:49:10.303881  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 23:49:10.303906  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 23:49:10.307619  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0916 23:49:10.321011  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 23:49:10.321513  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 23:49:10.321533  522590 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 23:49:10.335228  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 23:49:10.342628  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 23:49:10.353105  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 23:49:10.360830  522590 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 23:49:10.360864  522590 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 23:49:10.366097  522590 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 23:49:10.366124  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 23:49:10.368966  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 23:49:10.368997  522590 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 23:49:10.374870  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 23:49:10.374897  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 23:49:10.383228  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:10.419473  522590 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 23:49:10.419505  522590 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 23:49:10.420148  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 23:49:10.420173  522590 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 23:49:10.431466  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 23:49:10.431495  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 23:49:10.431508  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 23:49:10.447520  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 23:49:10.491601  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 23:49:10.491635  522590 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 23:49:10.495666  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 23:49:10.495699  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 23:49:10.522266  522590 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 23:49:10.522304  522590 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 23:49:10.608119  522590 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
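The confirmation above corresponds to the sed pipeline run at 23:49:10.113: it splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the Docker network gateway (192.168.49.1), and adds a log directive for query logging. The resulting Corefile fragment, reconstructed from that sed script (surrounding directives and indentation are illustrative):

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }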
	I0916 23:49:10.610081  522590 node_ready.go:35] waiting up to 6m0s for node "addons-069011" to be "Ready" ...
	I0916 23:49:10.613978  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 23:49:10.614095  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 23:49:10.619888  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 23:49:10.619918  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 23:49:10.636272  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 23:49:10.636303  522590 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 23:49:10.689230  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 23:49:10.705272  522590 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 23:49:10.705297  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 23:49:10.708368  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 23:49:10.708557  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 23:49:10.788275  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 23:49:10.788306  522590 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 23:49:10.806501  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 23:49:10.869607  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 23:49:10.869632  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 23:49:10.937889  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 23:49:10.937914  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 23:49:11.002071  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 23:49:11.002102  522590 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 23:49:11.047895  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 23:49:11.130142  522590 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-069011" context rescaled to 1 replicas
	I0916 23:49:11.643350  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.290178117s)
	I0916 23:49:11.643439  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.30078278s)
	I0916 23:49:11.643452  522590 addons.go:479] Verifying addon ingress=true in "addons-069011"
	I0916 23:49:11.643582  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.212051777s)
	I0916 23:49:11.643613  522590 addons.go:479] Verifying addon registry=true in "addons-069011"
	I0916 23:49:11.643522  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.260251451s)
	I0916 23:49:11.643722  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.196160875s)
	W0916 23:49:11.643735  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:11.643740  522590 addons.go:479] Verifying addon metrics-server=true in "addons-069011"
	I0916 23:49:11.643761  522590 retry.go:31] will retry after 298.602868ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
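Each retry of this apply fails identically because the defect is in the file, not the cluster: /etc/kubernetes/addons/ig-crd.yaml was transferred with only 14 bytes (see the scp line at 23:49:10.024), far too small to hold a CustomResourceDefinition, so kubectl validation rejects it for the missing mandatory apiVersion and kind fields. A hedged way to confirm this from the host, assuming SSH access to the addons-069011 node:

    minikube -p addons-069011 ssh -- sudo cat /etc/kubernetes/addons/ig-crd.yaml
    # any manifest kubectl can apply must carry both top-level fields, e.g.:
    #   apiVersion: apiextensions.k8s.io/v1
    #   kind: CustomResourceDefinition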
	I0916 23:49:11.646501  522590 out.go:179] * Verifying registry addon...
	I0916 23:49:11.646501  522590 out.go:179] * Verifying ingress addon...
	I0916 23:49:11.646504  522590 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-069011 service yakd-dashboard -n yakd-dashboard
	
	I0916 23:49:11.652191  522590 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 23:49:11.652206  522590 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 23:49:11.655147  522590 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 23:49:11.655173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:11.655271  522590 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 23:49:11.655299  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:11.943533  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:12.143203  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.336408881s)
	W0916 23:49:12.143280  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 23:49:12.143297  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.095362374s)
	I0916 23:49:12.143318  522590 retry.go:31] will retry after 271.042655ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
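Unlike the ig-crd failure, this one is an ordering race rather than a content defect: the snapshot CRDs and the VolumeSnapshotClass object are sent in a single apply batch, and the class is rejected ("no matches for kind ... ensure CRDs are installed first") before the freshly created CRDs become established; the forced reapply at 23:49:12.415 does complete (see 23:49:14.929). An illustrative two-phase alternative that avoids the race (file paths taken from the log; not what minikube actually runs):

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml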
	I0916 23:49:12.143322  522590 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-069011"
	I0916 23:49:12.145833  522590 out.go:179] * Verifying csi-hostpath-driver addon...
	I0916 23:49:12.148236  522590 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 23:49:12.153014  522590 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 23:49:12.153041  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:12.157053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:12.157321  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:12.415287  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0916 23:49:12.575627  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:12.575662  522590 retry.go:31] will retry after 298.950278ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:12.614105  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:12.652906  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:12.655120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:12.655721  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:12.875699  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:13.152262  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:13.155946  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:13.156155  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:13.653200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:13.655268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:13.655558  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:14.152741  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:14.154674  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:14.154869  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:14.651414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:14.654802  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:14.654981  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:14.929904  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.51454475s)
	I0916 23:49:14.929925  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.05417803s)
	W0916 23:49:14.929968  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:14.929993  522590 retry.go:31] will retry after 724.402782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:15.113335  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:15.152058  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:15.155353  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:15.155409  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:15.651139  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:15.655103  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:15.655174  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:15.655439  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:16.152053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:16.155268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:16.155481  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:16.233482  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:16.233517  522590 retry.go:31] will retry after 528.645422ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:16.652337  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:16.654976  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:16.655052  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:16.763126  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:17.152861  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:17.155200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:17.155374  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:17.346237  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:17.346292  522590 retry.go:31] will retry after 1.241721728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:17.613291  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:17.637138  522590 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 23:49:17.637240  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:17.651912  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:17.655594  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:17.655874  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:17.659459  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:17.770859  522590 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 23:49:17.790444  522590 addons.go:238] Setting addon gcp-auth=true in "addons-069011"
	I0916 23:49:17.790517  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:17.790880  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:17.810255  522590 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 23:49:17.810334  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:17.829504  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:17.924366  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:49:17.925772  522590 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0916 23:49:17.926989  522590 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 23:49:17.927016  522590 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 23:49:17.947928  522590 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 23:49:17.947963  522590 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 23:49:17.968887  522590 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 23:49:17.968910  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 23:49:17.988471  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 23:49:18.151889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:18.155501  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:18.155799  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:18.360333  522590 addons.go:479] Verifying addon gcp-auth=true in "addons-069011"
	I0916 23:49:18.361695  522590 out.go:179] * Verifying gcp-auth addon...
	I0916 23:49:18.364169  522590 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 23:49:18.367024  522590 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 23:49:18.367044  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
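
Note: each "waiting for pod" line is one iteration of minikube's poll loop, which watches pods by label selector until they leave the Pending state. The same state can be inspected by hand (assuming the kubeconfig context created by this run):

	kubectl --context addons-069011 -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth
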
	I0916 23:49:18.588324  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:18.652355  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:18.654775  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:18.655329  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:18.867741  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:19.151755  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:19.154903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:19.154930  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:19.161345  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:19.161383  522590 retry.go:31] will retry after 2.165570319s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:19.367774  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:19.614026  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:19.652152  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:19.655765  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:19.655827  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:19.867758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:20.151387  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:20.154666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:20.154897  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:20.368600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:20.651411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:20.655000  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:20.655011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:20.868027  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:21.151730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:21.155244  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:21.155464  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:21.327698  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:21.367411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:21.650905  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:21.655659  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:21.655769  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:21.867968  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:21.902069  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:21.902100  522590 retry.go:31] will retry after 1.920767743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:22.113269  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:22.152312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:22.154840  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:22.154952  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:22.368638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:22.651563  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:22.654897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:22.655020  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:22.868412  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:23.151599  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:23.155033  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:23.155245  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:23.367616  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:23.651422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:23.654714  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:23.654854  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:23.823078  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:23.867734  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:24.113772  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:24.152012  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:24.155306  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:24.155536  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:24.367843  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:24.396574  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:24.396608  522590 retry.go:31] will retry after 5.249600328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:24.651892  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:24.655386  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:24.655528  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:24.868048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:25.152228  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:25.154971  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:25.155056  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:25.368598  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:25.651661  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:25.655231  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:25.655269  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:25.867507  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:26.151287  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:26.155745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:26.155923  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:26.368083  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:26.612894  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:26.652086  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:26.655386  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:26.655500  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:26.867894  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:27.151727  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:27.155040  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:27.155077  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:27.368077  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:27.652080  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:27.655544  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:27.655685  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:27.868071  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:28.151972  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:28.155039  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:28.155194  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:28.367271  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:28.613247  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:28.652605  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:28.654553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:28.654734  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:28.868444  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:29.151120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:29.155325  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:29.155404  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:29.367903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:29.646635  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:29.651947  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:29.655369  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:29.655591  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:29.868090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:30.151994  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:30.155445  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:30.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:30.222879  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:30.222909  522590 retry.go:31] will retry after 6.679975361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
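
Note: the retry intervals logged by retry.go grow from 528ms up through several seconds, i.e. backoff with jitter. A shell sketch of the same apply-until-success pattern (hypothetical; minikube's actual retry loop is Go code in retry.go) would be:

	delay=1
	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.0/kubectl apply --force \
	    -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml; do
	  sleep "$delay"
	  delay=$((delay * 2))   # double the wait each attempt; minikube also adds jitter
	done
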
	I0916 23:49:30.368039  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:30.651921  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:30.655141  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:30.655354  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:30.867036  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:31.112894  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:31.151818  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:31.155258  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:31.155291  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:31.367578  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:31.651196  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:31.655723  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:31.655764  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:31.867818  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:32.152173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:32.155965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:32.156115  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:32.367078  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:32.652733  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:32.655287  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:32.655347  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:32.867604  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:33.113866  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:33.151850  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:33.155462  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:33.155490  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:33.367548  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:33.651173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:33.655487  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:33.655550  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:33.867796  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:34.151692  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:34.154752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:34.154822  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:34.367980  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:34.652127  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:34.655730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:34.655791  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:34.868271  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:35.151839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:35.155765  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:35.155925  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:35.368376  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:35.613366  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:35.651791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:35.655929  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:35.656002  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:35.868276  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:36.152007  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:36.155246  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:36.155379  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:36.367593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:36.652140  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:36.655627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:36.655826  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:36.867579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:36.903759  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:37.152322  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:37.155245  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:37.155410  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:37.367621  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:37.484516  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:37.484552  522590 retry.go:31] will retry after 4.853736845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:37.613755  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:37.651588  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:37.654987  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:37.655126  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:37.867377  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:38.151407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:38.154847  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:38.155074  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:38.368215  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:38.651724  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:38.655025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:38.655174  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:38.867641  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:39.151291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:39.155533  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:39.155660  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:39.368023  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:39.613957  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:39.652056  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:39.655324  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:39.655427  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:39.867688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:40.151889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:40.155213  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:40.155515  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:40.367629  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:40.652268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:40.655504  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:40.655716  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:40.867786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:41.151908  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:41.155026  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:41.155219  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:41.367009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:41.652274  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:41.654845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:41.654993  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:41.868497  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:42.113784  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:42.152011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:42.156178  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:42.156253  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:42.339312  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:42.368085  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:42.653863  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:42.656534  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:42.656609  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:42.867016  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:42.931965  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:42.932013  522590 retry.go:31] will retry after 9.201032876s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:43.151738  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:43.155452  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:43.157165  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:43.367931  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:43.651921  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:43.655792  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:43.655791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:43.868283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:44.151192  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:44.155952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:44.156077  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:44.368187  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:44.612897  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:44.651871  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:44.655165  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:44.655374  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:44.867416  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:45.152200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:45.155365  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:45.155527  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:45.367088  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:45.652905  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:45.655224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:45.655382  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:45.867470  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:46.152562  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:46.155553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:46.155698  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:46.367899  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:46.613967  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:46.652183  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:46.655613  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:46.655685  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:46.867721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:47.151749  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:47.155062  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:47.155242  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:47.367292  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:47.652156  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:47.655812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:47.656147  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:47.867423  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:48.152152  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:48.155526  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:48.155678  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:48.367871  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:48.651966  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:48.655104  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:48.655456  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:48.867380  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:49.113864  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:49.151422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:49.154601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:49.154659  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:49.368059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:49.651895  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:49.655081  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:49.655227  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:49.867193  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:50.151407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:50.154433  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:50.154532  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:50.367752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:50.614048  522590 node_ready.go:49] node "addons-069011" is "Ready"
	I0916 23:49:50.614124  522590 node_ready.go:38] duration metric: took 40.004018622s for node "addons-069011" to be "Ready" ...
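
Note: the node reported Ready roughly 40s after the wait began. The condition minikube polls for can be read directly (assuming the same context as above):

	kubectl --context addons-069011 get node addons-069011 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
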
	I0916 23:49:50.614142  522590 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:49:50.614260  522590 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:49:50.634002  522590 api_server.go:72] duration metric: took 40.737149121s to wait for apiserver process to appear ...
	I0916 23:49:50.634037  522590 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:49:50.634066  522590 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:49:50.639530  522590 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
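
Note: the healthz probe is a plain HTTPS GET against the apiserver that returns the literal body "ok" on success. It can be reproduced by hand (certificate verification skipped here, since this is a throwaway local test cluster):

	curl -ks https://192.168.49.2:8443/healthz
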
	I0916 23:49:50.640709  522590 api_server.go:141] control plane version: v1.34.0
	I0916 23:49:50.640743  522590 api_server.go:131] duration metric: took 6.69752ms to wait for apiserver health ...
	I0916 23:49:50.640754  522590 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:49:50.645035  522590 system_pods.go:59] 20 kube-system pods found
	I0916 23:49:50.645109  522590 system_pods.go:61] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:50.645119  522590 system_pods.go:61] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:50.645126  522590 system_pods.go:61] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending
	I0916 23:49:50.645131  522590 system_pods.go:61] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending
	I0916 23:49:50.645134  522590 system_pods.go:61] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending
	I0916 23:49:50.645138  522590 system_pods.go:61] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:50.645141  522590 system_pods.go:61] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:50.645146  522590 system_pods.go:61] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:50.645150  522590 system_pods.go:61] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:50.645156  522590 system_pods.go:61] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:50.645165  522590 system_pods.go:61] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:50.645171  522590 system_pods.go:61] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:50.645182  522590 system_pods.go:61] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:50.645192  522590 system_pods.go:61] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:50.645206  522590 system_pods.go:61] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:50.645211  522590 system_pods.go:61] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending
	I0916 23:49:50.645217  522590 system_pods.go:61] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:50.645222  522590 system_pods.go:61] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.645231  522590 system_pods.go:61] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.645238  522590 system_pods.go:61] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:50.645253  522590 system_pods.go:74] duration metric: took 4.491675ms to wait for pod list to return data ...
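
Each entry in the listing above is the pod's phase plus its unmet readiness conditions, which is why Pending pods carry the "Ready:ContainersNotReady (...)" suffix while Running ones do not. A sketch of how such a listing can be produced with client-go (the kubeconfig path is an assumption for illustration):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Path is an assumption; minikube writes a kubeconfig for the profile.
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, pod := range pods.Items {
            // Phase gives the coarse state; the unmet conditions carry the
            // "Ready:ContainersNotReady (...)" detail seen in the log.
            fmt.Printf("%s: %s", pod.Name, pod.Status.Phase)
            for _, c := range pod.Status.Conditions {
                if c.Status != "True" {
                    fmt.Printf(" / %s:%s (%s)", c.Type, c.Reason, c.Message)
                }
            }
            fmt.Println()
        }
    }
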
	I0916 23:49:50.645267  522590 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:49:50.649832  522590 default_sa.go:45] found service account: "default"
	I0916 23:49:50.649863  522590 default_sa.go:55] duration metric: took 4.587184ms for default service account to be created ...
	I0916 23:49:50.649876  522590 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:49:50.651240  522590 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 23:49:50.651263  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:50.653416  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:50.653453  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:50.653463  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:50.653471  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending
	I0916 23:49:50.653478  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending
	I0916 23:49:50.653507  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending
	I0916 23:49:50.653517  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:50.653523  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:50.653531  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:50.653541  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:50.653553  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:50.653564  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:50.653570  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:50.653577  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:50.653586  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:50.653604  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:50.653610  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending
	I0916 23:49:50.653621  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:50.653630  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.653641  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.653649  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:50.653671  522590 retry.go:31] will retry after 286.454663ms: missing components: kube-dns
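
The "will retry after ..." lines come from a backoff helper: the pod check is re-run until no required component is missing or an overall deadline expires, with a randomized, growing sleep between attempts. A hand-rolled sketch of the pattern follows; the base interval, jitter, and doubling factor are illustrative assumptions, not minikube's exact policy:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry keeps calling check until it succeeds or the deadline passes,
    // sleeping a jittered, growing interval in between -- the same shape as
    // the "will retry after ..." lines in the log.
    func retry(timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out: %w", err)
            }
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            backoff *= 2
        }
    }

    func main() {
        attempts := 0
        _ = retry(5*time.Second, func() error {
            attempts++
            if attempts < 3 {
                return errors.New("missing components: kube-dns")
            }
            return nil
        })
        fmt.Println("all components running")
    }
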
	I0916 23:49:50.654669  522590 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 23:49:50.654689  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:50.655263  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:50.867812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
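
The kapi.go:96 lines each poll one label selector (registry, ingress-nginx, gcp-auth, csi-hostpath-driver) until every matching pod reports Running; the bracketed [<nil>] appears to be the last poll error, i.e. none. A compact client-go sketch of such a selector wait, under the assumption of a fixed 500ms poll interval (the helper name and interval are not minikube's own):

    package example

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForSelector polls until every pod matching selector in ns is
    // Running, or the timeout expires. (Hypothetical helper; minikube's own
    // wait lives in kapi.go and emits the lines seen above.)
    func waitForSelector(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                allRunning := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                        allRunning = false
                        break
                    }
                }
                if allRunning {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
    }
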
	I0916 23:49:50.970963  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:50.971008  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:50.971021  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:50.971032  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 23:49:50.971040  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 23:49:50.971049  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 23:49:50.971060  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:50.971067  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:50.971075  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:50.971081  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:50.971093  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:50.971098  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:50.971107  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:50.971115  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:50.971127  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:50.971139  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:50.971149  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0916 23:49:50.971487  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:50.971519  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.971529  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.971537  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:50.971560  522590 retry.go:31] will retry after 250.710433ms: missing components: kube-dns
	I0916 23:49:51.152661  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:51.154830  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:51.154922  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:51.227146  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:51.227184  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:51.227191  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:51.227200  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 23:49:51.227206  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 23:49:51.227213  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 23:49:51.227219  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:51.227223  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:51.227226  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:51.227230  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:51.227235  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:51.227241  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:51.227244  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:51.227250  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:51.227256  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:51.227261  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:51.227265  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0916 23:49:51.227272  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:51.227277  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.227286  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.227292  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:51.227310  522590 retry.go:31] will retry after 293.334556ms: missing components: kube-dns
	I0916 23:49:51.368304  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:51.526481  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:51.526535  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:51.526545  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Running
	I0916 23:49:51.526559  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 23:49:51.526572  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 23:49:51.526582  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 23:49:51.526589  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:51.526595  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:51.526601  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:51.526608  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:51.526618  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:51.526623  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:51.526629  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:51.526635  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:51.526645  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:51.526690  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:51.526699  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0916 23:49:51.526714  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:51.526722  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.526731  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.526737  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Running
	I0916 23:49:51.526755  522590 system_pods.go:126] duration metric: took 876.872082ms to wait for k8s-apps to be running ...
	I0916 23:49:51.526767  522590 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:49:51.526834  522590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:49:51.543571  522590 system_svc.go:56] duration metric: took 16.790922ms WaitForService to wait for kubelet
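
The kubelet check above delegates entirely to systemd: `systemctl is-active --quiet` prints nothing and reports the unit state through its exit code alone (0 = active). The same check from Go, using the canonical invocation (the extra "service" token in the logged command is minikube's own phrasing):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet kubelet` signals the state purely
        // through its exit code: nil error means exit 0, i.e. active.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        if err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }
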
	I0916 23:49:51.543604  522590 kubeadm.go:578] duration metric: took 41.646760707s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:49:51.543633  522590 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:49:51.546804  522590 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:49:51.546832  522590 node_conditions.go:123] node cpu capacity is 8
	I0916 23:49:51.546851  522590 node_conditions.go:105] duration metric: took 3.210939ms to run NodePressure ...
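
The NodePressure verification reads the Node object's reported capacity and its pressure conditions; "no pressure" means MemoryPressure, DiskPressure, and PIDPressure all report False. A client-go sketch (the node name is taken from the log; the kubeconfig path is an assumption):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // path is an assumption
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-069011", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Capacity carries the figures quoted in the log.
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("cpu=%s ephemeral-storage=%s\n", cpu.String(), storage.String())
        // "No pressure" means each *Pressure condition reports False.
        for _, c := range node.Status.Conditions {
            switch c.Type {
            case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                fmt.Printf("%s=%s\n", c.Type, c.Status)
            }
        }
    }
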
	I0916 23:49:51.546866  522590 start.go:241] waiting for startup goroutines ...
	I0916 23:49:51.653201  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:51.655460  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:51.655502  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:51.867905  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:52.133215  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:52.152421  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:52.155241  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:52.155318  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:52.367901  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:52.651612  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:52.655810  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:52.655874  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:52.780604  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:52.780644  522590 retry.go:31] will retry after 11.236841486s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
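
The failure itself is client-side: kubectl validates that every document in an applied manifest sets apiVersion and kind, and the bundled ig-crd.yaml for the gadget (Inspektor Gadget) addon evidently sets neither, so each retry below fails identically until the file is corrected (or validation is disabled, as the error message suggests). A pre-flight check of the same rule, sketched with gopkg.in/yaml.v3 (a hypothetical helper, not part of minikube):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // Pre-flight the check kubectl performs: every document in the
        // manifest must set apiVersion and kind. Path mirrors the log.
        f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for i := 1; ; i++ {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            if doc.APIVersion == "" || doc.Kind == "" {
                fmt.Printf("document %d: apiVersion/kind not set\n", i)
            }
        }
    }
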
	I0916 23:49:52.867960  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:53.152499  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:53.155229  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:53.155690  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:53.369120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:53.653294  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:53.655366  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:53.655499  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:53.867612  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:54.152263  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:54.154786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:54.154825  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:54.368535  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:54.651809  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:54.655532  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:54.655654  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:54.868318  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:55.152216  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:55.154997  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:55.155198  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:55.368885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:55.652607  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:55.654882  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:55.654882  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:55.868072  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:56.153735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:56.155961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:56.156369  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:56.367288  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:56.651552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:56.654554  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:56.654654  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:56.867827  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:57.152232  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:57.154799  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:57.154814  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:57.368344  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:57.651690  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:57.655166  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:57.655327  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:57.867912  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:58.152149  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:58.155593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:58.155720  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:58.367868  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:58.652249  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:58.654626  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:58.654817  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:58.867989  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:59.152281  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:59.154848  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:59.154899  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:59.368414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:59.651849  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:59.655048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:59.655193  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:59.866961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:00.152429  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:00.154913  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:00.154932  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:00.367821  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:00.652008  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:00.655477  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:00.655518  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:00.867460  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:01.152318  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:01.155248  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:01.155323  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:01.367552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:01.651746  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:01.655519  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:01.655601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:01.867766  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:02.152212  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:02.154600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:02.154831  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:02.367336  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:02.651757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:02.655315  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:02.655331  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:02.867665  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:03.152281  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:03.154749  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:03.154818  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:03.368215  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:03.651319  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:03.655739  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:03.655966  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:03.868159  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:04.018435  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:50:04.151970  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:04.155986  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:04.156204  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:04.367594  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:50:04.598781  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:50:04.598815  522590 retry.go:31] will retry after 23.829016694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:50:04.652029  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:04.655382  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:04.655518  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:04.867585  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:05.151943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:05.155427  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:05.155490  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:05.367838  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:05.652819  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:05.654813  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:05.654893  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:05.868265  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:06.151902  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:06.155241  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:06.155278  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:06.367335  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:06.651933  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:06.655376  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:06.655409  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:06.867544  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:07.151927  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:07.155463  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:07.155566  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:07.367946  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:07.652554  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:07.655150  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:07.655250  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:07.867104  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:08.151576  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:08.154867  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:08.154932  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:08.367820  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:08.652108  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:08.655667  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:08.655674  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:08.867488  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:09.151318  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:09.155660  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:09.155771  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:09.368018  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:09.652352  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:09.654759  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:09.654924  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:09.867979  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:10.152292  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:10.154712  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:10.154744  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:10.367888  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:10.652342  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:10.654855  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:10.655052  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:10.868023  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:11.152284  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:11.154741  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:11.154823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:11.368224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:11.651602  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:11.654730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:11.655430  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:11.867911  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:12.152453  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:12.155032  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:12.155233  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:12.367898  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:12.652236  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:12.654831  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:12.654839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:12.868375  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:13.151282  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:13.155678  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:13.155786  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:13.368346  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:13.652132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:13.655641  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:13.655658  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:13.867735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:14.152048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:14.155624  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:14.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:14.367645  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:14.651952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:14.655351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:14.655433  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:14.867300  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:15.151804  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:15.155275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:15.155321  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:15.367103  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:15.651754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:15.655590  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:15.655740  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:15.868629  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:16.152123  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:16.155556  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:16.155585  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:16.367279  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:16.651583  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:16.655042  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:16.655146  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:16.867499  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:17.151753  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:17.154889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:17.154944  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:17.368258  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:17.651448  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:17.655920  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:17.655988  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:17.868165  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:18.151576  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:18.155019  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:18.155157  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:18.368301  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:18.651579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:18.654851  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:18.655022  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:18.868093  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:19.152647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:19.154885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:19.154951  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:19.368636  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:19.651987  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:19.655509  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:19.655549  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:19.867433  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:20.152200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:20.154985  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:20.155048  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:20.368109  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:20.651638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:20.654894  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:20.654923  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:20.867870  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:21.152292  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:21.155357  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:21.155505  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:21.368035  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:21.652897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:21.656101  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:21.656100  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:21.867817  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:22.152943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:22.155198  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:22.155272  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:22.367576  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:22.652627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:22.655810  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:22.655870  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:22.867990  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:23.152723  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:23.155609  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:23.155624  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:23.367814  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:23.653531  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:23.655283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:23.655824  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:23.867298  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:24.151888  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:24.155832  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:24.155956  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:24.373346  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:24.652179  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:24.655942  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:24.656079  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:24.867787  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:25.152745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:25.156266  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:25.156485  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:25.367952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:25.653577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:25.655613  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:25.655819  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:25.867860  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:26.153299  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:26.155510  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:26.155645  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:26.367671  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:26.652834  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:26.655448  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:26.655652  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:26.867254  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:27.151981  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:27.156009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:27.156850  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:27.367744  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:27.654351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:27.656634  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:27.656737  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:27.868098  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:28.153435  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:28.156745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:28.156944  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:28.367835  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:28.428940  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:50:28.651949  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:28.655492  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:28.655714  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:28.866833  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
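The run of kapi.go:96 lines above is minikube polling each addon's pods by label selector until they leave Pending, roughly one probe per half second per selector. A minimal sketch of that polling pattern with client-go, assuming an illustrative waitForPods helper rather than minikube's actual kapi.go implementation (the namespace and the registry selector are taken from the log; everything else here is hypothetical):

	// Sketch only: poll pods matching a label selector until all are Running.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // treat API hiccups as transient and keep polling
				}
				if len(pods.Items) == 0 {
					fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
					return false, nil
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						return false, nil
					}
				}
				return true, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForPods(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=registry"); err != nil {
			panic(err)
		}
	}

Returning false rather than an error from a failed List is why the report shows steady repeated "Pending" probes instead of an abort: only the overall timeout ends the wait.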
	W0916 23:50:29.128531  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:50:29.128569  522590 retry.go:31] will retry after 40.39789771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
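The stderr above means the YAML document kubectl parsed from ig-crd.yaml lacked top-level apiVersion and kind fields, which client-side validation requires of every manifest, so the whole apply exits non-zero even though the ig-deployment.yaml objects were applied unchanged. A minimal sketch of that check, assuming a hypothetical standalone checker (the file path comes from the log; the typeMeta struct and this program are illustrative, not kubectl's code):

	// Sketch only: reproduce the "apiVersion not set, kind not set" complaint
	// by decoding just the type metadata of a single-document manifest.
	package main

	import (
		"fmt"
		"os"

		"sigs.k8s.io/yaml"
	)

	type typeMeta struct {
		APIVersion string `json:"apiVersion"`
		Kind       string `json:"kind"`
	}

	func main() {
		data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
		if err != nil {
			panic(err)
		}
		var tm typeMeta
		if err := yaml.Unmarshal(data, &tm); err != nil {
			panic(err)
		}
		var missing []string
		if tm.APIVersion == "" {
			missing = append(missing, "apiVersion not set")
		}
		if tm.Kind == "" {
			missing = append(missing, "kind not set")
		}
		if len(missing) > 0 {
			fmt.Printf("error validating data: %v\n", missing)
		}
	}

Because the error is deterministic for a malformed file, the roughly 40s retry scheduled above fails the same way when it reruns at 23:51:09 below.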
	I0916 23:50:29.154066  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:29.156666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:29.156872  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:29.367799  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:29.652238  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:29.654780  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:29.655095  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:29.867922  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:30.152458  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:30.155006  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:30.155093  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:30.367812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:30.652850  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:30.655351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:30.655439  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:30.867340  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:31.151917  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:31.155386  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:31.155417  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:31.367531  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:31.653268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:31.657791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:31.657831  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:31.868270  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:32.155469  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:32.157902  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:32.158614  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:32.368334  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:32.652124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:32.656126  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:32.656171  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:32.867579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:33.152224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:33.155033  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:33.156187  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:33.366965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:33.652338  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:33.655162  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:33.655350  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:33.868673  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:34.152675  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:34.155008  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:34.155063  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:34.368239  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:34.652014  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:34.655025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:34.655185  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:34.867899  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:35.152626  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:35.155359  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:35.155446  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:35.367305  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:35.652378  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:35.655807  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:35.655815  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:35.868004  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:36.152291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:36.155228  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:36.155274  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:36.367904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:36.652666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:36.655054  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:36.655056  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:36.868245  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:37.153660  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:37.155936  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:37.156021  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:37.367947  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:37.652965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:37.654916  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:37.654970  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:37.867352  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:38.152079  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:38.155581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:38.155593  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:38.367781  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:38.652943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:38.655717  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:38.655815  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:38.868640  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:39.152316  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:39.155082  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:39.155138  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:39.368233  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:39.651993  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:39.654885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:39.655026  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:39.868217  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:40.152059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:40.155525  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:40.155590  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:40.367907  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:40.652106  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:40.655499  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:40.655512  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:40.867817  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:41.152251  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:41.154655  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:41.154763  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:41.367545  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:41.652678  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:41.654751  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:41.654768  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:41.868012  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:42.152312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:42.154862  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:42.154889  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:42.368681  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:42.652243  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:42.654497  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:42.654707  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:42.867848  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:43.152560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:43.156124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:43.156157  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:43.367649  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:43.652430  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:43.654968  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:43.654986  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:43.867477  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:44.151715  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:44.154833  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:44.154926  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:44.368003  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:44.652097  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:44.655411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:44.655482  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:44.867734  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:45.151785  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:45.155040  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:45.155294  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:45.367710  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:45.652316  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:45.654798  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:45.654835  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:45.867771  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:46.151940  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:46.155607  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:46.155638  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:46.367470  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:46.652017  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:46.655632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:46.655678  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:46.867796  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:47.152166  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:47.155566  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:47.155778  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:47.367781  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:47.653210  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:47.655490  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:47.655647  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:47.867856  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:48.152084  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:48.155486  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:48.155488  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:48.367425  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:48.651605  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:48.654912  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:48.654974  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:48.868218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:49.151097  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:49.155642  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:49.155716  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:49.367781  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:49.652527  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:49.654528  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:49.654540  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:49.867508  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:50.152341  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:50.155428  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:50.155428  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:50.367631  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:50.651795  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:50.654967  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:50.655191  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:50.867951  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:51.152414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:51.154961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:51.155228  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:51.368136  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:51.654278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:51.658434  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:51.658602  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:51.867554  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:52.151825  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:52.154981  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:52.155043  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:52.368227  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:52.651587  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:52.654841  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:52.654981  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:52.868253  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:53.151568  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:53.154852  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:53.154906  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:53.368332  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:53.652244  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:53.654695  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:53.654772  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:53.867872  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:54.152199  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:54.155137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:54.155272  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:54.367783  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:54.652699  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:54.654783  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:54.654979  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:54.868132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:55.152259  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:55.154647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:55.154768  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:55.367668  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:55.652881  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:55.655002  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:55.655049  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:55.868381  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:56.151518  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:56.154713  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:56.154713  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:56.367620  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:56.651888  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:56.655083  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:56.655175  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:56.868708  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:57.152144  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:57.155438  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:57.155487  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:57.367472  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:57.652234  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:57.654836  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:57.654874  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:57.867903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:58.152561  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:58.154532  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:58.154668  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:58.367739  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:58.652325  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:58.655541  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:58.655728  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:58.867577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:59.152224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:59.155017  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:59.155130  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:59.368654  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:59.652953  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:59.654943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:59.654982  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:59.868114  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:00.151581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:00.154961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:00.155143  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:00.368473  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:00.651816  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:00.655282  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:00.655277  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:00.867147  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:01.151121  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:01.155427  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:01.155456  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:01.367218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:01.651621  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:01.654735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:01.654783  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:01.867758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:02.152018  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:02.155540  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:02.155576  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:02.367896  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:02.652385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:02.655222  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:02.655273  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:02.867265  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:03.151348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:03.156159  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:03.156250  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:03.367497  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:03.652167  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:03.655608  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:03.655715  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:03.867725  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:04.151972  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:04.155471  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:04.155479  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:04.367579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:04.652472  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:04.655145  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:04.655205  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:04.867055  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:05.153048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:05.155508  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:05.155556  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:05.367853  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:05.653083  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:05.655046  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:05.655090  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:05.867138  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:06.152134  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:06.155607  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:06.155674  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:06.367789  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:06.652335  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:06.654809  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:06.654932  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:06.868697  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:07.152531  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:07.154911  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:07.154955  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:07.370805  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:07.652428  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:07.654916  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:07.654974  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:07.868557  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:08.151860  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:08.155090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:08.155145  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:08.367368  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:08.651698  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:08.654845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:08.654852  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:08.868069  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:09.151519  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:09.154937  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:09.154942  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:09.368515  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:09.526750  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:51:09.652541  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:09.655572  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:09.655659  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:09.868054  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:51:10.098163  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:51:10.098324  522590 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
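	The stderr line pins this failure to client-side schema validation: kubectl rejects the multi-document file because at least one YAML document inside /etc/kubernetes/addons/ig-crd.yaml omits the apiVersion and kind fields that every Kubernetes manifest must declare. Note that all of the ig-deployment.yaml objects (namespace, serviceaccount, daemonset, RBAC) applied cleanly, so only the CRD file is malformed. The sketch below is a hypothetical reproduction under a client-side dry run, not the actual inspektor-gadget CRD; the group and resource names are illustrative:

	    # Hypothetical minimal CRD; both top-level fields below are mandatory, and
	    # deleting either one reproduces the "apiVersion not set, kind not set" error.
	    cat <<'EOF' | kubectl apply --dry-run=client -f -
	    apiVersion: apiextensions.k8s.io/v1      # required on every manifest
	    kind: CustomResourceDefinition           # required on every manifest
	    metadata:
	      name: examples.demo.example.com        # hypothetical CRD name
	    spec:
	      group: demo.example.com
	      scope: Namespaced
	      names:
	        plural: examples
	        singular: example
	        kind: Example
	      versions:
	        - name: v1alpha1
	          served: true
	          storage: true
	          schema:
	            openAPIV3Schema:
	              type: object
	              x-kubernetes-preserve-unknown-fields: true
	    EOF

	One plausible cause, consistent with the error text, is a stray `---` document separator in ig-crd.yaml whose trailing document parses to a mapping without those two keys; an entirely empty document would simply be skipped.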
	I0916 23:51:10.152880  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:10.154839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:10.154875  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:10.367834  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:10.652251  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:10.655021  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:10.655084  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:10.867384  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:11.151842  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:11.155099  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:11.155150  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:11.368186  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:11.652269  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:11.654999  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:11.655256  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:11.867128  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:12.152667  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:12.155099  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:12.155107  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:12.367914  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:12.652518  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:12.654870  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:12.654893  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:12.867312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:13.151982  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:13.155271  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:13.155332  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:13.367823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:13.652387  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:13.654951  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:13.655146  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:13.868844  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:14.153334  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:14.155643  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:14.155904  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:14.368482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:14.652515  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:14.655724  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:14.655757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:14.867812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:15.152601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:15.155443  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:15.155604  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:15.367774  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:15.652539  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:15.655836  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:15.655906  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:15.868440  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:16.151573  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:16.154754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:16.154807  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:16.368168  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:16.652042  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:16.655560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:16.655747  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:16.868218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:17.151965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:17.155140  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:17.155210  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:17.368464  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:17.652037  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:17.655823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:17.655854  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:17.867935  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:18.152022  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:18.155444  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:18.155517  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:18.367482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:18.651927  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:18.654865  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:18.655024  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:18.868282  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:19.151370  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:19.155878  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:19.155924  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:19.368413  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:19.651943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:19.655352  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:19.655352  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:19.868827  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:20.151845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:20.155066  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:20.155072  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:20.369339  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:20.651811  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:20.654774  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:20.654963  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:20.867983  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:21.152276  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:21.154893  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:21.154944  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:21.367794  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:21.652538  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:21.654934  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:21.654939  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:21.867898  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:22.151949  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:22.155295  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:22.155445  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:22.367407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:22.651590  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:22.654904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:22.655019  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:22.867887  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:23.152190  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:23.155502  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:23.155545  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:23.367753  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:23.652562  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:23.654651  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:23.654656  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:23.867848  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:24.152073  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:24.155610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:24.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:24.367957  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:24.652348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:24.654900  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:24.654900  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:24.868057  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:25.152408  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:25.155409  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:25.155602  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:25.368413  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:25.652052  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:25.655209  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:25.655312  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:25.867380  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:26.151535  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:26.155823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:26.155856  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:26.368351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:26.651651  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:26.654990  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:26.654988  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:26.867537  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:27.152091  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:27.155112  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:27.155142  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:27.368638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:27.654137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:27.656355  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:27.656515  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:27.869096  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:28.152385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:28.154581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:28.154673  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:28.367987  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:28.652294  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:28.654753  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:28.654853  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:28.869651  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:29.152647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:29.154807  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:29.154850  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:29.368887  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:29.654241  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:29.655038  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:29.655196  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:29.867665  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:30.151919  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:30.155232  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:30.155296  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:30.367463  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:30.651721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:30.655098  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:30.655163  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:30.867385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:31.151552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:31.154871  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:31.154947  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:31.369090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:31.652787  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:31.654631  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:31.654656  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:31.869965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:32.152268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:32.154797  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:32.154858  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:32.368137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:32.651480  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:32.654729  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:32.654778  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:32.868357  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:33.151932  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:33.155182  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:33.155339  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:33.367560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:33.651975  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:33.655351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:33.655413  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:33.867981  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:34.152479  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:34.155002  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:34.155059  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:34.368688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:34.651549  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:34.655000  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:34.655063  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:34.868189  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:35.151809  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:35.155205  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:35.155350  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:35.367322  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:35.651627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:35.752333  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:35.752426  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:35.868016  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:36.152178  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:36.155466  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:36.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:36.368191  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:36.651475  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:36.654786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:36.654883  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:36.868252  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:37.152153  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:37.155806  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:37.155969  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:37.368131  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:37.652021  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:37.655754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:37.655968  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:37.869697  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:38.152009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:38.155144  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:38.155151  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:38.369995  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:38.652185  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:38.655536  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:38.655553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:38.867639  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:39.151740  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:39.154964  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:39.155029  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:39.368608  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:39.651802  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:39.654757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:39.654961  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:39.869716  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:40.152077  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:40.155323  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:40.155354  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:40.367481  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:40.651750  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:40.655053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:40.655154  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:40.867047  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:41.152227  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:41.154790  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:41.154936  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:41.367727  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:41.652124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:41.655578  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:41.655618  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:41.869685  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:42.152239  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:42.154748  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:42.154775  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:42.367986  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:42.652348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:42.654735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:42.654796  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:42.868157  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:43.151984  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:43.155093  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:43.155268  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:43.367574  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:43.652278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:43.655113  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:43.655163  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:43.867108  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:44.151635  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:44.155169  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:44.155303  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:44.367632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:44.654449  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:44.656348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:44.656416  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:44.867492  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:45.151632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:45.155015  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:45.155082  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:45.368046  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:45.652581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:45.655278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:45.655440  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:45.867304  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:46.151985  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:46.155138  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:46.155139  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:46.367275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:46.652201  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:46.654659  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:46.654708  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:46.867813  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:47.152102  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:47.155410  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:47.155445  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:47.368132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:47.652347  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:47.654903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:47.654929  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:47.868615  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:48.151762  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:48.154894  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:48.155015  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:48.367728  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:48.652716  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:48.655105  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:48.655114  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:48.867844  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:49.151899  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:49.155222  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:49.155285  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:49.367647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:49.651960  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:49.655182  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:49.655212  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:49.867701  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:50.152323  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:50.154730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:50.154952  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:50.368036  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:50.652752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:50.655140  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:50.655212  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:50.867998  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:51.152002  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:51.155125  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:51.155152  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:51.367814  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:51.652049  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:51.655522  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:51.655726  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:51.868294  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:52.151791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:52.155565  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:52.155573  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:52.367865  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:52.652161  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:52.655512  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:52.655672  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:52.868579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:53.151650  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:53.154924  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:53.155034  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:53.369092  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:53.651132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:53.655513  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:53.655522  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:53.868691  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:54.152450  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:54.155354  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:54.155524  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:54.367600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:54.651882  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:54.655373  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:54.655408  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:54.867056  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:55.152214  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:55.154682  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:55.154691  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:55.367828  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:55.652289  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:55.654838  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:55.654919  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:55.868482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:56.152185  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:56.155573  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:56.155680  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... ~450 near-identical entries elided: kapi.go:96 re-polled the four label selectors (kubernetes.io/minikube-addons=registry, kubernetes.io/minikube-addons=gcp-auth, kubernetes.io/minikube-addons=csi-hostpath-driver, app.kubernetes.io/name=ingress-nginx) every 250-500 ms, and every check from 23:51:56 through 23:52:52 still reported current state: Pending: [<nil>] ...]
	I0916 23:52:52.652610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:52.655667  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:52.655854  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:52.901721  522590 kapi.go:107] duration metric: took 3m34.537544348s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 23:52:52.906543  522590 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-069011 cluster.
	I0916 23:52:52.912324  522590 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 23:52:52.913737  522590 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
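
The three out.go messages above describe the gcp-auth addon's behavior once its wait completed. A minimal sketch of acting on them (not taken from this run: the pod name and the label value "true" are assumptions, while the gcp-auth-skip-secret key and the --refresh flag come directly from the messages above):

    # Hypothetical pod that opts out of gcp-auth credential injection.
    kubectl --context addons-069011 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds              # hypothetical name
      labels:
        gcp-auth-skip-secret: "true"  # key from the message above; value assumed
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "3600"]
    EOF

    # Re-mount credentials into pods that existed before gcp-auth finished enabling.
    out/minikube-linux-amd64 -p addons-069011 addons enable gcp-auth --refresh
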
	I0916 23:52:53.153197  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:53.155660  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:53.155666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:53.652828  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:53.655014  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:53.655110  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:54.152324  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:54.155476  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:54.155496  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:54.652106  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:54.655581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:54.655609  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:55.152128  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:55.155885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:55.156039  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:55.652641  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:55.654855  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:55.654978  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:56.152674  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:56.154874  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:56.155000  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:56.652035  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:56.655457  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:56.655496  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:57.152186  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:57.155542  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:57.155561  522590 kapi.go:107] duration metric: took 3m45.503354476s to wait for app.kubernetes.io/name=ingress-nginx ...
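
The kapi.go:107 lines mark a label selector's wait completing, while the surrounding kapi.go:96 lines show the roughly 500ms polling that precedes it. A minimal sketch of reproducing the same wait by hand with kubectl (assuming the addon pods live in kube-system, as the registry pods in this report do; this is not part of minikube's own tooling):

    # Block until pods matching the addon label report Ready, up to 6 minutes.
    kubectl --context addons-069011 -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=registry \
      --for=condition=Ready --timeout=6m

If the pods never leave Pending, as in the log below, this exits non-zero after the timeout, mirroring the test failure.
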
	I0916 23:52:57.652350  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:57.655498  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:58.152881  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:58.154850  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:58.652665  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:58.654696  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:59.152543  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:59.154283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:59.653277  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:59.659941  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:00.152852  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:00.154649  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:00.652327  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:00.654800  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:01.152414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:01.154525  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:01.651817  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:01.655138  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:02.152332  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:02.154656  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:02.653502  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:02.656037  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:03.151857  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:03.155055  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:03.652334  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:03.654876  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:04.152174  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:04.155870  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:04.653124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:04.655053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:05.153568  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:05.155625  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:05.653230  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:05.655236  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:06.152361  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:06.154928  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:06.653059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:06.656200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:07.152336  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:07.155224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:07.652346  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:07.655712  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:08.155752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:08.155824  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:08.653610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:08.655208  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:09.152628  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:09.154934  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:09.652494  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:09.655144  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:10.154348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:10.155986  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:10.652369  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:10.655443  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:11.152148  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:11.155670  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:11.652553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:11.655243  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:12.152796  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:12.155106  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:12.651747  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:12.655634  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:13.153010  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:13.155374  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:13.654738  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:13.656482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:14.152952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:14.155229  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:14.652523  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:14.655028  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:15.152364  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:15.155721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:15.655954  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:15.656795  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:16.152967  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:16.154926  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:16.653027  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:16.655826  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:17.153039  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:17.154839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:17.653034  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:17.655038  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:18.152156  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:18.156123  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:18.651828  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:18.654999  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:19.151648  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:19.154596  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:19.652222  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:19.654551  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:20.155150  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:20.155193  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:20.652029  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:20.655101  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:21.151749  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:21.154961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:21.651672  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:21.655009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:22.152329  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:22.154730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:22.652063  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:22.655272  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:23.152182  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:23.155422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:23.652218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:23.654560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:24.152574  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:24.155253  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:24.652502  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:24.655345  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:25.151663  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:25.155115  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:25.651721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:25.655044  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:26.152383  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:26.155509  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:26.652354  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:26.654747  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:27.169011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:27.169001  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:27.653424  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:27.655714  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:28.152979  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:28.254144  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:28.651804  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:28.655470  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:29.151827  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:29.155108  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:29.652422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:29.655116  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:30.152193  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:30.155976  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:30.652210  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:30.654980  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:31.151709  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:31.155038  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:31.651589  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:31.655050  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:32.151868  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:32.155145  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:32.652363  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:32.655892  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:33.151643  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:33.154810  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:33.653583  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:33.655279  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:34.153153  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:34.155522  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:34.652584  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:34.655570  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:35.151580  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:35.156561  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:35.652732  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:35.655133  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:36.155361  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:36.158601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:36.652275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:36.654674  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:37.153755  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:37.155714  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:37.652926  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:37.654759  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:38.151466  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:38.154733  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:38.653313  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:38.655745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:39.152234  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:39.155638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:39.652445  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:39.654541  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:40.152461  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:40.155143  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:40.652312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:40.654686  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:41.152156  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:41.155170  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:41.651644  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:41.654733  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:42.152309  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:42.154360  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:42.652338  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:42.654550  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:43.151904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:43.154960  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:43.652091  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:43.655542  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:44.151570  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:44.154712  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:44.652708  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:44.654522  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:45.151593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:45.154608  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:45.651922  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:45.655174  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:46.151376  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:46.155482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:46.652627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:46.654516  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:47.151782  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:47.154824  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:47.652429  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:47.654757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:48.152137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:48.154936  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:48.651792  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:48.654929  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:49.152207  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:49.155200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:49.652077  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:49.655059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:50.152055  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:50.155283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:50.651757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:50.654677  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:51.152004  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:51.154803  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:51.653046  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:51.654923  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:52.152123  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:52.154978  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:52.651950  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:52.654986  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:53.151595  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:53.154725  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:53.652661  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:53.654540  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:54.152011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:54.155079  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:54.652239  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:54.654476  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:55.151772  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:55.155226  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:55.652520  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:55.655124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:56.151415  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:56.155604  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:56.652777  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:56.654897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:57.152275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:57.155829  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:57.653025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:57.654754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:58.152978  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:58.154716  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:58.652635  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:58.654449  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:59.152070  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:59.155270  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:59.652577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:59.655424  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:00.152756  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:00.154426  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:00.651964  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:00.655181  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:01.151369  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:01.155561  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:01.651593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:01.654586  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:02.152252  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:02.154655  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:02.652610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:02.654423  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:03.152030  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:03.155167  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:03.651855  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:03.654881  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:04.151556  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:04.154852  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:04.652834  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:04.654500  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:05.152255  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:05.154344  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:05.652483  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:05.655325  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:06.151729  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:06.154664  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:06.652904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:06.654681  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:07.152267  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:07.154724  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:07.652291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:07.654988  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:08.151577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:08.154865  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:08.652678  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:08.654618  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:09.152302  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:09.154688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:09.653092  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:09.654963  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:10.151758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:10.154735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:10.652999  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:10.654845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:11.151513  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:11.154498  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:11.652494  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:11.654909  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:12.151298  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:12.155557  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:12.652643  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:12.654491  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:13.152751  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:13.155246  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:13.652126  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:13.655183  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:14.151763  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:14.155046  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:14.652276  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:14.654785  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:15.152658  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:15.154758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:15.652985  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:15.655060  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:16.151705  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:16.154775  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:16.652773  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:16.654589  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:17.152592  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:17.155097  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:17.651889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:17.655277  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:18.152217  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:18.154701  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:18.652903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:18.654813  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:19.152686  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:19.154506  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:19.652260  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:19.654251  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:20.152385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:20.154777  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:20.652915  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:20.654754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:21.152381  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:21.155278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:21.651555  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:21.654768  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:22.152695  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:22.154647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:22.652919  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:22.654785  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:23.151929  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:23.155096  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:23.652215  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:23.654600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:24.152243  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:24.154806  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:24.653577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:24.655336  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:25.151915  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... the two "waiting for pod" lines above repeat unchanged at ~500ms intervals per selector, from 23:54:25 through 23:55:11, with both pods still Pending throughout ...]
	I0916 23:55:11.155188  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:11.652025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:11.652821  522590 kapi.go:107] duration metric: took 6m0.000625805s to wait for kubernetes.io/minikube-addons=registry ...
	W0916 23:55:11.652991  522590 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0916 23:55:12.148606  522590 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=csi-hostpath-driver" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0916 23:55:12.148655  522590 kapi.go:107] duration metric: took 6m0.000415083s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0916 23:55:12.148771  522590 out.go:285] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0916 23:55:12.151062  522590 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, ingress-dns, amd-gpu-device-plugin, storage-provisioner, default-storageclass, storage-provisioner-rancher, cloud-spanner, metrics-server, yakd, volumesnapshots, gcp-auth, ingress
	I0916 23:55:12.152575  522590 addons.go:514] duration metric: took 6m2.25568849s for enable addons: enabled=[registry-creds nvidia-device-plugin ingress-dns amd-gpu-device-plugin storage-provisioner default-storageclass storage-provisioner-rancher cloud-spanner metrics-server yakd volumesnapshots gcp-auth ingress]
	I0916 23:55:12.152638  522590 start.go:246] waiting for cluster config update ...
	I0916 23:55:12.152661  522590 start.go:255] writing updated cluster config ...
	I0916 23:55:12.152955  522590 ssh_runner.go:195] Run: rm -f paused
	I0916 23:55:12.157549  522590 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:55:12.161141  522590 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m872b" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.165703  522590 pod_ready.go:94] pod "coredns-66bc5c9577-m872b" is "Ready"
	I0916 23:55:12.165731  522590 pod_ready.go:86] duration metric: took 4.567019ms for pod "coredns-66bc5c9577-m872b" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.168067  522590 pod_ready.go:83] waiting for pod "etcd-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.172550  522590 pod_ready.go:94] pod "etcd-addons-069011" is "Ready"
	I0916 23:55:12.172583  522590 pod_ready.go:86] duration metric: took 4.489308ms for pod "etcd-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.174872  522590 pod_ready.go:83] waiting for pod "kube-apiserver-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.179401  522590 pod_ready.go:94] pod "kube-apiserver-addons-069011" is "Ready"
	I0916 23:55:12.179432  522590 pod_ready.go:86] duration metric: took 4.532992ms for pod "kube-apiserver-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.181473  522590 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.561817  522590 pod_ready.go:94] pod "kube-controller-manager-addons-069011" is "Ready"
	I0916 23:55:12.561846  522590 pod_ready.go:86] duration metric: took 380.349392ms for pod "kube-controller-manager-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.763149  522590 pod_ready.go:83] waiting for pod "kube-proxy-v85kq" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.161850  522590 pod_ready.go:94] pod "kube-proxy-v85kq" is "Ready"
	I0916 23:55:13.161880  522590 pod_ready.go:86] duration metric: took 398.696904ms for pod "kube-proxy-v85kq" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.362802  522590 pod_ready.go:83] waiting for pod "kube-scheduler-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.761895  522590 pod_ready.go:94] pod "kube-scheduler-addons-069011" is "Ready"
	I0916 23:55:13.761929  522590 pod_ready.go:86] duration metric: took 399.094008ms for pod "kube-scheduler-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.761944  522590 pod_ready.go:40] duration metric: took 1.604356273s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:55:13.810173  522590 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0916 23:55:13.812279  522590 out.go:179] * Done! kubectl is now configured to use "addons-069011" cluster and "default" namespace by default
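
The six-minute run of "waiting for pod" lines above is minikube's kapi poll loop: it lists pods by label selector every ~500ms and gives up when the 6m0s context deadline fires, which is the "context deadline exceeded" reported above. A minimal client-go sketch of that pattern (an illustration of the cadence in the log, not minikube's actual kapi.go):

	package podwait

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabel polls every 500ms, matching the interval between the
	// timestamps in the log above, until all matching pods run or ctx expires.
	func waitForLabel(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				// after 6m0s this surfaces as "context deadline exceeded"
				return fmt.Errorf("waiting for %s pods: %w", selector, ctx.Err())
			case <-ticker.C:
				pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					continue // temporary error; retry on the next tick
				}
				ready := len(pods.Items) > 0
				for _, p := range pods.Items {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					if p.Status.Phase != "Running" {
						ready = false
					}
				}
				if ready {
					return nil
				}
			}
		}
	}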
	
	
	==> CRI-O <==
	Sep 17 00:04:04 addons-069011 crio[933]: time="2025-09-17 00:04:04.313442264Z" level=info msg="Stopping pod sandbox: 843001c23149aa0e1efefa67869bb66590b8abb5d80215be357389aa30692adc" id=699c0af6-5d58-44e9-a08c-c6110d7bd690 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 17 00:04:04 addons-069011 crio[933]: time="2025-09-17 00:04:04.313493037Z" level=info msg="Stopped pod sandbox (already stopped): 843001c23149aa0e1efefa67869bb66590b8abb5d80215be357389aa30692adc" id=699c0af6-5d58-44e9-a08c-c6110d7bd690 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 17 00:04:04 addons-069011 crio[933]: time="2025-09-17 00:04:04.313788678Z" level=info msg="Removing pod sandbox: 843001c23149aa0e1efefa67869bb66590b8abb5d80215be357389aa30692adc" id=1bc23d65-aefb-4ade-acdb-806ef7028a29 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 17 00:04:04 addons-069011 crio[933]: time="2025-09-17 00:04:04.320934163Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 17 00:04:04 addons-069011 crio[933]: time="2025-09-17 00:04:04.320983096Z" level=info msg="Removed pod sandbox: 843001c23149aa0e1efefa67869bb66590b8abb5d80215be357389aa30692adc" id=1bc23d65-aefb-4ade-acdb-806ef7028a29 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 17 00:04:08 addons-069011 crio[933]: time="2025-09-17 00:04:08.710381711Z" level=info msg="Pulling image: docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89" id=2d36cb7a-b5c7-4eb6-91e9-0645cfc3aea2 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:04:08 addons-069011 crio[933]: time="2025-09-17 00:04:08.713356402Z" level=info msg="Trying to access \"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\""
	Sep 17 00:04:08 addons-069011 crio[933]: time="2025-09-17 00:04:08.800315527Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=e2ab2f8e-6031-468c-8803-6d380766c2a5 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:04:08 addons-069011 crio[933]: time="2025-09-17 00:04:08.800584101Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=e2ab2f8e-6031-468c-8803-6d380766c2a5 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:04:11 addons-069011 crio[933]: time="2025-09-17 00:04:11.174912662Z" level=info msg="Checking image status: docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d" id=fc47526c-e456-4269-b737-9e5a15eddf11 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:04:11 addons-069011 crio[933]: time="2025-09-17 00:04:11.175178315Z" level=info msg="Image docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d not found" id=fc47526c-e456-4269-b737-9e5a15eddf11 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:04:13 addons-069011 crio[933]: time="2025-09-17 00:04:13.174160682Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b70879a8-b08a-42f8-96bf-910276f1bbcc name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:04:13 addons-069011 crio[933]: time="2025-09-17 00:04:13.174468066Z" level=info msg="Image docker.io/nginx:alpine not found" id=b70879a8-b08a-42f8-96bf-910276f1bbcc name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:04:23 addons-069011 crio[933]: time="2025-09-17 00:04:23.174476551Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=4719afc7-8a70-41cf-9cf4-cf7230201fa4 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:04:23 addons-069011 crio[933]: time="2025-09-17 00:04:23.174815774Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=4719afc7-8a70-41cf-9cf4-cf7230201fa4 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:04:25 addons-069011 crio[933]: time="2025-09-17 00:04:25.174857486Z" level=info msg="Checking image status: docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d" id=df960967-72d2-42d7-bcbb-6a3655c983be name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:04:25 addons-069011 crio[933]: time="2025-09-17 00:04:25.174865327Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=cb9b855f-53ab-491d-88e1-41306c1ed5e6 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:04:25 addons-069011 crio[933]: time="2025-09-17 00:04:25.175112239Z" level=info msg="Image docker.io/nginx:alpine not found" id=cb9b855f-53ab-491d-88e1-41306c1ed5e6 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:04:25 addons-069011 crio[933]: time="2025-09-17 00:04:25.175117525Z" level=info msg="Image docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d not found" id=df960967-72d2-42d7-bcbb-6a3655c983be name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:04:37 addons-069011 crio[933]: time="2025-09-17 00:04:37.174158007Z" level=info msg="Checking image status: docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d" id=320565f3-c394-45b3-824b-039bcd466536 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:04:37 addons-069011 crio[933]: time="2025-09-17 00:04:37.174477538Z" level=info msg="Image docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d not found" id=320565f3-c394-45b3-824b-039bcd466536 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:04:38 addons-069011 crio[933]: time="2025-09-17 00:04:38.807880257Z" level=info msg="Pulling image: docker.io/nginx:latest" id=4e22c3e3-cd89-49fb-89dd-f613429d36e3 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:04:38 addons-069011 crio[933]: time="2025-09-17 00:04:38.810843318Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 17 00:04:39 addons-069011 crio[933]: time="2025-09-17 00:04:39.174369948Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=476402d2-d153-4429-9217-e32bf0fbd71a name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:04:39 addons-069011 crio[933]: time="2025-09-17 00:04:39.174714230Z" level=info msg="Image docker.io/nginx:alpine not found" id=476402d2-d153-4429-9217-e32bf0fbd71a name=/runtime.v1.ImageService/ImageStatus
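
The alternating "Checking image status" / "Image ... not found" pairs above are the kubelet querying CRI-O's image service over gRPC and then scheduling another PullImage attempt; the registry, busybox, and nginx images never arrive from docker.io, which matches the absence of registry and nginx containers in the container status below. A minimal sketch of the same /runtime.v1.ImageService/ImageStatus RPC (socket path and image name are assumptions for illustration):

	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// /var/run/crio/crio.sock is CRI-O's default endpoint (assumed here).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewImageServiceClient(conn)
		resp, err := client.ImageStatus(context.Background(), &runtimeapi.ImageStatusRequest{
			Image: &runtimeapi.ImageSpec{Image: "docker.io/nginx:alpine"},
		})
		if err != nil {
			panic(err)
		}
		if resp.GetImage() == nil {
			fmt.Println("image not found; the kubelet would follow up with PullImage")
		}
	}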
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	8fc15d8cb7dd5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   e614fc1047195       csi-hostpathplugin-s98vb
	295b9edc02db1       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          7 minutes ago       Running             csi-provisioner                          0                   e614fc1047195       csi-hostpathplugin-s98vb
	3bebfc3ce5f89       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          8 minutes ago       Running             busybox                                  0                   b34e9dc849123       busybox
	0994d530b2186       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            8 minutes ago       Running             liveness-probe                           0                   e614fc1047195       csi-hostpathplugin-s98vb
	d78ede218b3d9       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           10 minutes ago      Running             hostpath                                 0                   e614fc1047195       csi-hostpathplugin-s98vb
	16a4495ac9a55       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                11 minutes ago      Running             node-driver-registrar                    0                   e614fc1047195       csi-hostpathplugin-s98vb
	ab63cb98da9fa       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             11 minutes ago      Running             controller                               0                   1c8433f3bdf68       ingress-nginx-controller-9cc49f96f-4m84v
	cb0aaa55cf5e9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            12 minutes ago      Running             gadget                                   0                   38b62a86f7523       gadget-g862x
	75b35093f1f14       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              13 minutes ago      Running             registry-proxy                           0                   f2e835ff4c172       registry-proxy-gtpv9
	af48fae595f24       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      13 minutes ago      Running             volume-snapshot-controller               0                   7daa29e729a88       snapshot-controller-7d9fbc56b8-st98r
	fce1ccd8d33b3       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   13 minutes ago      Running             csi-external-health-monitor-controller   0                   e614fc1047195       csi-hostpathplugin-s98vb
	0e4759a430832       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                                             14 minutes ago      Exited              patch                                    2                   0937f6f98ea11       ingress-nginx-admission-patch-sp7zb
	3c653d4c50b5c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      14 minutes ago      Running             volume-snapshot-controller               0                   4be25aad82a4e       snapshot-controller-7d9fbc56b8-s7m82
	11ae5f470bf10       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   14 minutes ago      Exited              create                                   0                   d933a3ae75df0       ingress-nginx-admission-create-wj8lw
	0957eacca23bd       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              14 minutes ago      Running             csi-resizer                              0                   b8131d2ee78de       csi-hostpath-resizer-0
	ad4a09c21105c       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             14 minutes ago      Running             csi-attacher                             0                   15f9a9c33b53e       csi-hostpath-attacher-0
	c1b11b9e2fae1       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             14 minutes ago      Running             local-path-provisioner                   0                   be69758a594c2       local-path-provisioner-648f6765c9-4qs6g
	7d0db99be084d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             14 minutes ago      Running             storage-provisioner                      0                   e26878809420e       storage-provisioner
	b62ac7b1e2d93       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             14 minutes ago      Running             coredns                                  0                   90cd65a058e3e       coredns-66bc5c9577-m872b
	81f4db589dfd0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             15 minutes ago      Running             kindnet-cni                              0                   282dceccf27e4       kindnet-hn7tx
	8204c89cdc90d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                                             15 minutes ago      Running             kube-proxy                               0                   076ce47b67764       kube-proxy-v85kq
	d1d2d3ef1a2d6       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                                             15 minutes ago      Running             kube-controller-manager                  0                   2befa508c819b       kube-controller-manager-addons-069011
	f4991aa96dbe9       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                                             15 minutes ago      Running             kube-apiserver                           0                   24f1de8dafedd       kube-apiserver-addons-069011
	ecbc264153ff2       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                                             15 minutes ago      Running             kube-scheduler                           0                   3af000cb5a57c       kube-scheduler-addons-069011
	5a81076e6d9a8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             15 minutes ago      Running             etcd                                     0                   f590790ed13d4       etcd-addons-069011
	
	
	==> coredns [b62ac7b1e2d935063ca8c0594642886e49ad0423507f04d148e7bd385ca935ce] <==
	[INFO] 10.244.0.16:60831 - 63795 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004048865s
	[INFO] 10.244.0.16:60831 - 61821 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.000061741s
	[INFO] 10.244.0.16:60831 - 59506 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.00008781s
	[INFO] 10.244.0.16:60831 - 42957 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000072825s
	[INFO] 10.244.0.16:60831 - 54341 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000102552s
	[INFO] 10.244.0.16:60831 - 62960 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000044411s
	[INFO] 10.244.0.16:60831 - 3318 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000066285s
	[INFO] 10.244.0.16:60831 - 17453 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000128595s
	[INFO] 10.244.0.16:60831 - 65270 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000124118s
	[INFO] 10.244.0.16:33095 - 36398 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000246617s
	[INFO] 10.244.0.16:33095 - 33148 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000272807s
	[INFO] 10.244.0.16:33095 - 24382 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000102887s
	[INFO] 10.244.0.16:33095 - 15388 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000127114s
	[INFO] 10.244.0.16:33095 - 18797 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000095227s
	[INFO] 10.244.0.16:33095 - 14114 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00010561s
	[INFO] 10.244.0.16:33095 - 42956 "A IN registry.kube-system.svc.cluster.local.local. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004514349s
	[INFO] 10.244.0.16:33095 - 25547 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004732712s
	[INFO] 10.244.0.16:33095 - 46989 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.000102731s
	[INFO] 10.244.0.16:33095 - 63516 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.000115934s
	[INFO] 10.244.0.16:33095 - 54173 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000054152s
	[INFO] 10.244.0.16:33095 - 19163 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000094624s
	[INFO] 10.244.0.16:33095 - 59233 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000073464s
	[INFO] 10.244.0.16:33095 - 54606 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000081477s
	[INFO] 10.244.0.16:33095 - 40503 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000145971s
	[INFO] 10.244.0.16:33095 - 55526 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000153534s
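
Every lookup above is the same service name walked through the resolver's search path: registry.kube-system.svc.cluster.local has only four dots, so with ndots:5 each configured suffix is appended and NXDOMAINs before the name is finally queried verbatim and answered NOERROR. Reconstructed from the suffixes visible in the log (an inference, not captured output; the nameserver shown is the conventional kube-dns ClusterIP, assumed here), the querying pod's /etc/resolv.conf would look roughly like:

	nameserver 10.96.0.10
	search kube-system.svc.cluster.local svc.cluster.local cluster.local local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	options ndots:5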
	
	
	==> describe nodes <==
	Name:               addons-069011
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-069011
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=addons-069011
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_49_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-069011
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-069011"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:49:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-069011
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:04:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:03:10 +0000   Tue, 16 Sep 2025 23:49:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:03:10 +0000   Tue, 16 Sep 2025 23:49:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:03:10 +0000   Tue, 16 Sep 2025 23:49:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:03:10 +0000   Tue, 16 Sep 2025 23:49:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-069011
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e6a06e1e17043f19f3b8f5ea0927359
	  System UUID:                fa23b867-4022-409a-8baa-bf981ffedafe
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  gadget                      gadget-g862x                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-4m84v                      100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-m872b                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 csi-hostpathplugin-s98vb                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-addons-069011                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kindnet-hn7tx                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-addons-069011                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-069011                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-v85kq                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-069011                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 registry-66898fdd98-bl4r5                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 registry-proxy-gtpv9                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-7d9fbc56b8-s7m82                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 snapshot-controller-7d9fbc56b8-st98r                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  local-path-storage          helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  local-path-storage          local-path-provisioner-648f6765c9-4qs6g                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node addons-069011 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node addons-069011 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node addons-069011 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node addons-069011 event: Registered Node addons-069011 in Controller
	  Normal  NodeReady                14m   kubelet          Node addons-069011 status is now: NodeReady
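
(As a cross-check of the allocation table: the 950m CPU request is the sum of the per-pod requests listed above, 100m each for coredns, etcd, kindnet, kube-scheduler, and the ingress-nginx controller, plus 250m for kube-apiserver and 200m for kube-controller-manager; 950m of the 8000m allocatable is 11.875%, which kubectl truncates to the 11% shown. The 310Mi memory request is likewise 90Mi + 70Mi + 100Mi + 50Mi.)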
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	
	
	==> etcd [5a81076e6d9a8c9983866e09b1190810cd0059c34edeae1a479f9d18f3003a91] <==
	{"level":"warn","ts":"2025-09-16T23:49:01.021210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.027886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.034514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.041663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.048524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.054851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.061680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.068240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.075225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.081757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.105206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.111554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.154896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:12.666348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:12.673196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.575058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.581784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.598000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.605378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33386","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-16T23:59:00.630787Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1449}
	{"level":"info","ts":"2025-09-16T23:59:00.656834Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1449,"took":"25.282457ms","hash":3232880921,"current-db-size-bytes":5799936,"current-db-size":"5.8 MB","current-db-size-in-use-bytes":3645440,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2025-09-16T23:59:00.656898Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3232880921,"revision":1449,"compact-revision":-1}
	{"level":"info","ts":"2025-09-17T00:04:00.635503Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2177}
	{"level":"info","ts":"2025-09-17T00:04:00.654518Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2177,"took":"18.415625ms","hash":2584493315,"current-db-size-bytes":5799936,"current-db-size":"5.8 MB","current-db-size-in-use-bytes":3166208,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2025-09-17T00:04:00.654575Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2584493315,"revision":2177,"compact-revision":1449}
	
	
	==> kernel <==
	 00:04:49 up  2:47,  0 users,  load average: 0.18, 2.81, 26.05
	Linux addons-069011 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [81f4db589dfd0f8f014a7fc056f2d7f752ecc52737aea10ae2f8a98d0242428b] <==
	I0917 00:02:40.184855       1 main.go:301] handling current node
	I0917 00:02:50.185040       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:02:50.185076       1 main.go:301] handling current node
	I0917 00:03:00.185549       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:00.185583       1 main.go:301] handling current node
	I0917 00:03:10.183901       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:10.183928       1 main.go:301] handling current node
	I0917 00:03:20.191472       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:20.191511       1 main.go:301] handling current node
	I0917 00:03:30.187813       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:30.188191       1 main.go:301] handling current node
	I0917 00:03:40.185552       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:40.185607       1 main.go:301] handling current node
	I0917 00:03:50.191489       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:50.191524       1 main.go:301] handling current node
	I0917 00:04:00.185921       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:04:00.185962       1 main.go:301] handling current node
	I0917 00:04:10.184475       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:04:10.184525       1 main.go:301] handling current node
	I0917 00:04:20.186448       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:04:20.186489       1 main.go:301] handling current node
	I0917 00:04:30.186221       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:04:30.186262       1 main.go:301] handling current node
	I0917 00:04:40.184500       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:04:40.184573       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f4991aa96dbe98af7f934784cdc7973d5aabec72325938f0e98ad8efde3d06e3] <==
	I0916 23:53:51.505661       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:54:19.846477       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:55:21.099421       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:55:29.068080       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:56:24.856015       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0916 23:56:38.562764       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43110: use of closed network connection
	E0916 23:56:38.758708       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43158: use of closed network connection
	I0916 23:56:47.547088       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0916 23:56:47.750812       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.94.177"}
	I0916 23:56:48.077381       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.184.141"}
	I0916 23:56:56.387694       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0916 23:56:58.875443       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:57:28.517320       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:58:21.717919       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:58:53.740979       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:59:01.561467       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0916 23:59:46.839359       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:00:03.548840       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:01:10.960424       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:01:15.531695       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:28.446522       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:31.841808       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:03:34.885369       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:03:39.392704       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:04:37.349511       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [d1d2d3ef1a2d61d604d7b7b71875c31a98127791ebbcaaae9e7c5dcebb1fd036] <==
	I0916 23:49:08.558692       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0916 23:49:08.559424       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0916 23:49:08.560582       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0916 23:49:08.560682       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0916 23:49:08.562044       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0916 23:49:08.562105       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:49:08.562171       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:49:08.562209       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:49:08.562217       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:49:08.562221       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:49:08.563325       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:49:08.564561       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:49:08.570797       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-069011" podCIDRs=["10.244.0.0/24"]
	I0916 23:49:08.576824       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0916 23:49:38.568454       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 23:49:38.568633       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0916 23:49:38.568684       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0916 23:49:38.586865       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0916 23:49:38.591210       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0916 23:49:38.668805       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:49:38.692110       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:49:53.514314       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0916 23:56:52.202912       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0916 23:58:53.764380       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0917 00:01:02.592919       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	
	
	==> kube-proxy [8204c89cdc90d58370aa745a3053c12e5b976409a1e0bedddf9508ac3e770c1f] <==
	I0916 23:49:09.803647       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:49:09.874911       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:49:09.984976       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:49:09.985628       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:49:09.986296       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:49:10.154642       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:49:10.159433       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:49:10.183201       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:49:10.195463       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:49:10.195513       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:49:10.199563       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:49:10.199664       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:49:10.200188       1 config.go:309] "Starting node config controller"
	I0916 23:49:10.200265       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:49:10.200334       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:49:10.200369       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:49:10.200991       1 config.go:200] "Starting service config controller"
	I0916 23:49:10.201078       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:49:10.299859       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:49:10.300474       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0916 23:49:10.300501       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0916 23:49:10.302086       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ecbc264153ff2a219390febac6665f8efc1a49ab24db502b79ba6888e6bd5b71] <==
	E0916 23:49:01.591306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0916 23:49:01.591979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0916 23:49:01.591995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:49:01.592038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0916 23:49:01.592032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0916 23:49:01.592058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0916 23:49:01.592081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0916 23:49:01.592128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0916 23:49:01.592273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0916 23:49:01.592272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0916 23:49:01.592315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0916 23:49:02.478666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0916 23:49:02.478742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0916 23:49:02.495998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0916 23:49:02.533597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0916 23:49:02.645572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0916 23:49:02.658831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0916 23:49:02.700650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:49:02.730028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0916 23:49:02.731014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0916 23:49:02.807698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0916 23:49:02.811032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0916 23:49:02.813063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0916 23:49:02.832467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I0916 23:49:05.387364       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 17 00:03:59 addons-069011 kubelet[1557]: E0917 00:03:59.175469    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="44795e64-34b3-4492-b6af-9e6353fa4bb4"
	Sep 17 00:04:04 addons-069011 kubelet[1557]: E0917 00:04:04.372557    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067444372231679  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:04:04 addons-069011 kubelet[1557]: E0917 00:04:04.372601    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067444372231679  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:04:08 addons-069011 kubelet[1557]: E0917 00:04:08.709806    1557 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 17 00:04:08 addons-069011 kubelet[1557]: E0917 00:04:08.709885    1557 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 17 00:04:08 addons-069011 kubelet[1557]: E0917 00:04:08.710090    1557 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb_local-path-storage(ed2099f3-5b8b-4c41-a38b-24d1fff3085a): ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 00:04:08 addons-069011 kubelet[1557]: E0917 00:04:08.710142    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb" podUID="ed2099f3-5b8b-4c41-a38b-24d1fff3085a"
	Sep 17 00:04:08 addons-069011 kubelet[1557]: E0917 00:04:08.800904    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb" podUID="ed2099f3-5b8b-4c41-a38b-24d1fff3085a"
	Sep 17 00:04:11 addons-069011 kubelet[1557]: E0917 00:04:11.175581    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-bl4r5" podUID="34782a61-58ac-458e-ab2f-7a22bac44c65"
	Sep 17 00:04:13 addons-069011 kubelet[1557]: E0917 00:04:13.174819    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="44795e64-34b3-4492-b6af-9e6353fa4bb4"
	Sep 17 00:04:14 addons-069011 kubelet[1557]: E0917 00:04:14.375076    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067454374851837  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:04:14 addons-069011 kubelet[1557]: E0917 00:04:14.375108    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067454374851837  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:04:24 addons-069011 kubelet[1557]: E0917 00:04:24.377841    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067464377530100  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:04:24 addons-069011 kubelet[1557]: E0917 00:04:24.377889    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067464377530100  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:04:25 addons-069011 kubelet[1557]: E0917 00:04:25.175432    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="44795e64-34b3-4492-b6af-9e6353fa4bb4"
	Sep 17 00:04:25 addons-069011 kubelet[1557]: E0917 00:04:25.175452    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-bl4r5" podUID="34782a61-58ac-458e-ab2f-7a22bac44c65"
	Sep 17 00:04:34 addons-069011 kubelet[1557]: E0917 00:04:34.379681    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067474379367369  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:04:34 addons-069011 kubelet[1557]: E0917 00:04:34.379726    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067474379367369  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:04:37 addons-069011 kubelet[1557]: E0917 00:04:37.174845    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-bl4r5" podUID="34782a61-58ac-458e-ab2f-7a22bac44c65"
	Sep 17 00:04:38 addons-069011 kubelet[1557]: E0917 00:04:38.807346    1557 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89"
	Sep 17 00:04:38 addons-069011 kubelet[1557]: E0917 00:04:38.807459    1557 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89"
	Sep 17 00:04:38 addons-069011 kubelet[1557]: E0917 00:04:38.807704    1557 kuberuntime_manager.go:1449] "Unhandled Error" err="container minikube-ingress-dns start failed in pod kube-ingress-dns-minikube_kube-system(3ebf3aba-8898-42b1-a92e-3bc50dd56aab): ErrImagePull: reading manifest sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 00:04:38 addons-069011 kubelet[1557]: E0917 00:04:38.807763    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ErrImagePull: \"reading manifest sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="3ebf3aba-8898-42b1-a92e-3bc50dd56aab"
	Sep 17 00:04:44 addons-069011 kubelet[1557]: E0917 00:04:44.382543    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067484382211464  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:04:44 addons-069011 kubelet[1557]: E0917 00:04:44.382582    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067484382211464  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	
	
	==> storage-provisioner [7d0db99be084d7a7996f085af51ba0b4b9263d1a30c5ba98cac79995b3641b35] <==
	W0917 00:04:24.951072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:26.954965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:26.958886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:28.962735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:28.967442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:30.970797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:30.977640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:32.981319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:32.986899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:34.990659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:34.995502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:36.998980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:37.004372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:39.007504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:39.012502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:41.015821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:41.020933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:43.024352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:43.028406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:45.032117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:45.035973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:47.039558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:47.045311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:49.049685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:49.054343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-069011 -n addons-069011
helpers_test.go:269: (dbg) Run:  kubectl --context addons-069011 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-wj8lw ingress-nginx-admission-patch-sp7zb kube-ingress-dns-minikube registry-66898fdd98-bl4r5 helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-069011 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-wj8lw ingress-nginx-admission-patch-sp7zb kube-ingress-dns-minikube registry-66898fdd98-bl4r5 helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-069011 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-wj8lw ingress-nginx-admission-patch-sp7zb kube-ingress-dns-minikube registry-66898fdd98-bl4r5 helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb: exit status 1 (104.463522ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-069011/192.168.49.2
	Start Time:       Tue, 16 Sep 2025 23:56:47 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.24
	IPs:
	  IP:  10.244.0.24
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kksmh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kksmh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  8m3s                  default-scheduler  Successfully assigned default/nginx to addons-069011
	  Warning  Failed     102s (x4 over 6m17s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     102s (x4 over 6m17s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    25s (x10 over 6m17s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     25s (x10 over 6m17s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    11s (x5 over 8m2s)    kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-069011/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:01:13 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rfz5d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-rfz5d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m37s                default-scheduler  Successfully assigned default/task-pv-pod to addons-069011
	  Warning  Failed     72s (x2 over 2m12s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     72s (x2 over 2m12s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    59s (x2 over 2m12s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     59s (x2 over 2m12s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    45s (x3 over 3m37s)  kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s54zg (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-s54zg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-wj8lw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-sp7zb" not found
	Error from server (NotFound): pods "kube-ingress-dns-minikube" not found
	Error from server (NotFound): pods "registry-66898fdd98-bl4r5" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-069011 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-wj8lw ingress-nginx-admission-patch-sp7zb kube-ingress-dns-minikube registry-66898fdd98-bl4r5 helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-069011 addons disable ingress-dns --alsologtostderr -v=1: (1.066152524s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-069011 addons disable ingress --alsologtostderr -v=1: (7.730207244s)
--- FAIL: TestAddons/parallel/Ingress (492.43s)
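Note: every image-pull failure in the logs above is Docker Hub's unauthenticated pull rate limit ("toomanyrequests"), not an ingress regression. A minimal mitigation sketch for reruns, assuming the affected images can be fetched once on the CI host: side-load them into the node with minikube image load so the kubelet never contacts docker.io. The image names below are taken from the failures above; the profile name matches this run.

	# Pre-load the rate-limited images into the cluster's container store.
	docker pull docker.io/nginx:alpine
	minikube -p addons-069011 image load docker.io/nginx:alpine
	minikube -p addons-069011 image load docker.io/kicbase/minikube-ingress-dns:0.0.4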

x
+
TestAddons/parallel/CSI (373.39s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0917 00:01:03.955426  521273 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0917 00:01:03.959045  521273 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0917 00:01:03.959078  521273 kapi.go:107] duration metric: took 3.685674ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.701178ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-069011 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-069011 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [0b15e693-4577-4039-b409-5badaa871bfc] Pending
helpers_test.go:352: "task-pv-pod" [0b15e693-4577-4039-b409-5badaa871bfc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-069011 -n addons-069011
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-09-17 00:07:13.599559671 +0000 UTC m=+1141.066903249
addons_test.go:567: (dbg) Run:  kubectl --context addons-069011 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-069011 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-069011/192.168.49.2
Start Time:       Wed, 17 Sep 2025 00:01:13 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.26
IPs:
  IP:  10.244.0.26
Containers:
  task-pv-container:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           80/TCP (http-server)
    Host Port:      0/TCP (http-server)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rfz5d (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc
    ReadOnly:   false
  kube-api-access-rfz5d:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/task-pv-pod to addons-069011
  Normal   BackOff    86s (x5 over 4m35s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     86s (x5 over 4m35s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    72s (x4 over 6m)     kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     4s (x4 over 4m35s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     4s (x4 over 4m35s)   kubelet            Error: ErrImagePull
addons_test.go:567: (dbg) Run:  kubectl --context addons-069011 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-069011 logs task-pv-pod -n default: exit status 1 (71.19788ms)

** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:567: kubectl --context addons-069011 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
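Note: the CSI path itself worked here (the hpvc claim bound and task-pv-pod was scheduled); the pod is blocked solely on the same docker.io pull limit shown in the events above. One hedged way to unblock such pulls, assuming a Docker Hub account is available — the secret name "regcred" and the credential placeholders are illustrative, not part of this run:

	# Authenticate pulls for the default service account so docker.io
	# applies its higher authenticated rate limit.
	kubectl --context addons-069011 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context addons-069011 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'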
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
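
Note: the "HOST ENV snapshots" line above just reads the three proxy variables from the host. A tiny sketch of that snapshot, with the "<empty>" placeholder matching the log line (the output format is assumed from the line above, not from helpers_test.go):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Print each proxy variable, substituting "<empty>" when unset or blank.
	for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
		v := os.Getenv(k)
		if v == "" {
			v = "<empty>"
		}
		fmt.Printf("%s=%q ", k, v)
	}
	fmt.Println()
}
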
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-069011
helpers_test.go:243: (dbg) docker inspect addons-069011:

-- stdout --
	[
	    {
	        "Id": "678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1",
	        "Created": "2025-09-16T23:48:50.029636255Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 523240,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-16T23:48:50.075029861Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/hosts",
	        "LogPath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1-json.log",
	        "Name": "/addons-069011",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-069011:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-069011",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1",
	                "LowerDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-069011",
	                "Source": "/var/lib/docker/volumes/addons-069011/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-069011",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-069011",
	                "name.minikube.sigs.k8s.io": "addons-069011",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f7ea0b62281ff8981f73b140342aff58601fbb663df7278dfdd6743a41abcca5",
	            "SandboxKey": "/var/run/docker/netns/f7ea0b62281f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-069011": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:4c:3e:1e:87:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d62ec0fa3bfb3ffd62859a508f03996c549db14f34473599ddd1b9022067b7b9",
	                    "EndpointID": "f8f4fe858390c8f96bc24eec26736fad3a3b1ba30f09e93e016a6a79e947f7af",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-069011",
	                        "678205c9d470"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
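
Note: everything needed to reach the node is in the inspect payload above; for instance, the SSH host port (33133) that the provisioning log below reads with a --format template can be extracted from the same JSON with nothing but the Go standard library. A hedged sketch, decoding only the fields used here:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container models just the slice of `docker inspect` output used below.
type container struct {
	Name            string
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "addons-069011").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, c := range cs {
		for _, b := range c.NetworkSettings.Ports["22/tcp"] {
			// For the container above this prints 127.0.0.1:33133.
			fmt.Printf("%s ssh -> %s:%s\n", c.Name, b.HostIp, b.HostPort)
		}
	}
}
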
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-069011 -n addons-069011
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-069011 logs -n 25: (1.407824017s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-515641                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-515641   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ delete  │ -p download-only-997829                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-997829   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ delete  │ -p download-only-515641                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-515641   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ start   │ --download-only -p download-docker-660125 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-660125 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ delete  │ -p download-docker-660125                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-660125 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ start   │ --download-only -p binary-mirror-785971 --alsologtostderr --binary-mirror http://127.0.0.1:38515 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-785971   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ delete  │ -p binary-mirror-785971                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-785971   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ addons  │ enable dashboard -p addons-069011                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ addons  │ disable dashboard -p addons-069011                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ start   │ -p addons-069011 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:55 UTC │
	│ addons  │ addons-069011 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:55 UTC │ 16 Sep 25 23:55 UTC │
	│ addons  │ addons-069011 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ enable headlamp -p addons-069011 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ addons-069011 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-069011                                                                                                                                                                                                                                                                                                                                                                                           │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ addons-069011 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ addons-069011 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:57 UTC │
	│ addons  │ addons-069011 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:58 UTC │ 16 Sep 25 23:58 UTC │
	│ addons  │ addons-069011 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-069011          │ jenkins │ v1.37.0 │ 17 Sep 25 00:00 UTC │ 17 Sep 25 00:00 UTC │
	│ addons  │ addons-069011 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-069011          │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ addons  │ addons-069011 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-069011          │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ addons  │ addons-069011 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-069011          │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ addons  │ addons-069011 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-069011          │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ addons  │ addons-069011 addons disable ingress-dns --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-069011          │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ addons  │ addons-069011 addons disable ingress --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-069011          │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:48:27
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:48:27.723751  522590 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:48:27.723864  522590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:48:27.723869  522590 out.go:374] Setting ErrFile to fd 2...
	I0916 23:48:27.723873  522590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:48:27.724066  522590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0916 23:48:27.724618  522590 out.go:368] Setting JSON to false
	I0916 23:48:27.725494  522590 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9051,"bootTime":1758057457,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:48:27.725585  522590 start.go:140] virtualization: kvm guest
	I0916 23:48:27.728073  522590 out.go:179] * [addons-069011] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:48:27.729850  522590 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:48:27.729868  522590 notify.go:220] Checking for updates...
	I0916 23:48:27.733822  522590 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:48:27.736141  522590 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0916 23:48:27.738039  522590 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0916 23:48:27.740423  522590 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:48:27.743368  522590 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:48:27.746574  522590 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:48:27.771724  522590 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:48:27.771874  522590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:48:27.829971  522590 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:48:27.818365984 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:48:27.830249  522590 docker.go:318] overlay module found
	I0916 23:48:27.832946  522590 out.go:179] * Using the docker driver based on user configuration
	I0916 23:48:27.834751  522590 start.go:304] selected driver: docker
	I0916 23:48:27.834826  522590 start.go:918] validating driver "docker" against <nil>
	I0916 23:48:27.834849  522590 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:48:27.835571  522590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:48:27.897913  522590 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:48:27.886229333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:48:27.898100  522590 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:48:27.898315  522590 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:48:27.900183  522590 out.go:179] * Using Docker driver with root privileges
	I0916 23:48:27.901481  522590 cni.go:84] Creating CNI manager for ""
	I0916 23:48:27.901597  522590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 23:48:27.901613  522590 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:48:27.901710  522590 start.go:348] cluster config:
	{Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:48:27.903324  522590 out.go:179] * Starting "addons-069011" primary control-plane node in "addons-069011" cluster
	I0916 23:48:27.904623  522590 cache.go:123] Beginning downloading kic base image for docker with crio
	I0916 23:48:27.905841  522590 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:48:27.907270  522590 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:48:27.907330  522590 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:48:27.907328  522590 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0916 23:48:27.907354  522590 cache.go:58] Caching tarball of preloaded images
	I0916 23:48:27.907495  522590 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 23:48:27.907513  522590 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0916 23:48:27.907895  522590 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/config.json ...
	I0916 23:48:27.907924  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/config.json: {Name:mk15dc7feab5fd17bb004b2e5f6ac3bc55ac0d4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:27.925199  522590 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0916 23:48:27.925352  522590 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0916 23:48:27.925371  522590 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0916 23:48:27.925375  522590 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0916 23:48:27.925383  522590 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0916 23:48:27.925403  522590 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0916 23:48:40.932191  522590 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0916 23:48:40.932224  522590 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:48:40.932259  522590 start.go:360] acquireMachinesLock for addons-069011: {Name:mk9387b718f452cc25627a84d4c20b7f46084ff2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:48:40.932371  522590 start.go:364] duration metric: took 90.542µs to acquireMachinesLock for "addons-069011"
	I0916 23:48:40.932411  522590 start.go:93] Provisioning new machine with config: &{Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 23:48:40.932527  522590 start.go:125] createHost starting for "" (driver="docker")
	I0916 23:48:40.934531  522590 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0916 23:48:40.934774  522590 start.go:159] libmachine.API.Create for "addons-069011" (driver="docker")
	I0916 23:48:40.934810  522590 client.go:168] LocalClient.Create starting
	I0916 23:48:40.934920  522590 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0916 23:48:41.819608  522590 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0916 23:48:42.094971  522590 cli_runner.go:164] Run: docker network inspect addons-069011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 23:48:42.113173  522590 cli_runner.go:211] docker network inspect addons-069011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 23:48:42.113240  522590 network_create.go:284] running [docker network inspect addons-069011] to gather additional debugging logs...
	I0916 23:48:42.113258  522590 cli_runner.go:164] Run: docker network inspect addons-069011
	W0916 23:48:42.130815  522590 cli_runner.go:211] docker network inspect addons-069011 returned with exit code 1
	I0916 23:48:42.130846  522590 network_create.go:287] error running [docker network inspect addons-069011]: docker network inspect addons-069011: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-069011 not found
	I0916 23:48:42.130884  522590 network_create.go:289] output of [docker network inspect addons-069011]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-069011 not found
	
	** /stderr **
	I0916 23:48:42.130990  522590 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:48:42.149832  522590 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002180220}
	I0916 23:48:42.149931  522590 network_create.go:124] attempt to create docker network addons-069011 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 23:48:42.150036  522590 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-069011 addons-069011
	I0916 23:48:42.212157  522590 network_create.go:108] docker network addons-069011 192.168.49.0/24 created
	I0916 23:48:42.212194  522590 kic.go:121] calculated static IP "192.168.49.2" for the "addons-069011" container
	I0916 23:48:42.212312  522590 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:48:42.229867  522590 cli_runner.go:164] Run: docker volume create addons-069011 --label name.minikube.sigs.k8s.io=addons-069011 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:48:42.252846  522590 oci.go:103] Successfully created a docker volume addons-069011
	I0916 23:48:42.252968  522590 cli_runner.go:164] Run: docker run --rm --name addons-069011-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069011 --entrypoint /usr/bin/test -v addons-069011:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:48:45.649491  522590 cli_runner.go:217] Completed: docker run --rm --name addons-069011-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069011 --entrypoint /usr/bin/test -v addons-069011:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (3.39647838s)
	I0916 23:48:45.649523  522590 oci.go:107] Successfully prepared a docker volume addons-069011
	I0916 23:48:45.649558  522590 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:48:45.649589  522590 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:48:45.649695  522590 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-069011:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:48:49.956300  522590 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-069011:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.306552681s)
	I0916 23:48:49.956343  522590 kic.go:203] duration metric: took 4.306749088s to extract preloaded images to volume ...
	W0916 23:48:49.956477  522590 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:48:49.956523  522590 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:48:49.956572  522590 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:48:50.013382  522590 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-069011 --name addons-069011 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069011 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-069011 --network addons-069011 --ip 192.168.49.2 --volume addons-069011:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:48:50.304600  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Running}}
	I0916 23:48:50.323420  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:48:50.342386  522590 cli_runner.go:164] Run: docker exec addons-069011 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:48:50.402276  522590 oci.go:144] the created container "addons-069011" has a running status.
	I0916 23:48:50.402326  522590 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa...
	I0916 23:48:50.521235  522590 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:48:50.553384  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:48:50.579068  522590 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:48:50.579099  522590 kic_runner.go:114] Args: [docker exec --privileged addons-069011 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:48:50.638566  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:48:50.659803  522590 machine.go:93] provisionDockerMachine start ...
	I0916 23:48:50.660411  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:50.680019  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:50.680310  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:50.680332  522590 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:48:50.820950  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-069011
	
	I0916 23:48:50.820990  522590 ubuntu.go:182] provisioning hostname "addons-069011"
	I0916 23:48:50.821063  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:50.841195  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:50.841673  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:50.841710  522590 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-069011 && echo "addons-069011" | sudo tee /etc/hostname
	I0916 23:48:50.996855  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-069011
	
	I0916 23:48:50.996967  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.016407  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:51.016637  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:51.016655  522590 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-069011' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-069011/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-069011' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:48:51.154270  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:48:51.154311  522590 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0916 23:48:51.154380  522590 ubuntu.go:190] setting up certificates
	I0916 23:48:51.154420  522590 provision.go:84] configureAuth start
	I0916 23:48:51.154487  522590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069011
	I0916 23:48:51.173820  522590 provision.go:143] copyHostCerts
	I0916 23:48:51.173904  522590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0916 23:48:51.174069  522590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0916 23:48:51.174140  522590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0916 23:48:51.174195  522590 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.addons-069011 san=[127.0.0.1 192.168.49.2 addons-069011 localhost minikube]
	I0916 23:48:51.417777  522590 provision.go:177] copyRemoteCerts
	I0916 23:48:51.417839  522590 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:48:51.417897  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.435902  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:51.535686  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 23:48:51.563321  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:48:51.590971  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 23:48:51.617420  522590 provision.go:87] duration metric: took 462.978002ms to configureAuth
	I0916 23:48:51.617461  522590 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:48:51.617668  522590 config.go:182] Loaded profile config "addons-069011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:48:51.617795  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.638144  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:51.638409  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:51.638436  522590 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 23:48:51.891077  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 23:48:51.891114  522590 machine.go:96] duration metric: took 1.230812219s to provisionDockerMachine
	I0916 23:48:51.891125  522590 client.go:171] duration metric: took 10.956309615s to LocalClient.Create
	I0916 23:48:51.891146  522590 start.go:167] duration metric: took 10.956377105s to libmachine.API.Create "addons-069011"
	I0916 23:48:51.891155  522590 start.go:293] postStartSetup for "addons-069011" (driver="docker")
	I0916 23:48:51.891170  522590 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:48:51.891245  522590 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:48:51.891288  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.909900  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.010593  522590 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:48:52.014317  522590 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:48:52.014357  522590 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:48:52.014366  522590 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:48:52.014375  522590 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:48:52.014406  522590 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0916 23:48:52.014479  522590 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0916 23:48:52.014515  522590 start.go:296] duration metric: took 123.348567ms for postStartSetup
	I0916 23:48:52.014852  522590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069011
	I0916 23:48:52.034024  522590 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/config.json ...
	I0916 23:48:52.034357  522590 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:48:52.034430  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:52.053383  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.147697  522590 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:48:52.152300  522590 start.go:128] duration metric: took 11.219755748s to createHost
	I0916 23:48:52.152322  522590 start.go:83] releasing machines lock for "addons-069011", held for 11.219940729s
	I0916 23:48:52.152383  522590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069011
	I0916 23:48:52.170897  522590 ssh_runner.go:195] Run: cat /version.json
	I0916 23:48:52.170959  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:52.170960  522590 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:48:52.171033  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:52.190054  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.190316  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.282770  522590 ssh_runner.go:195] Run: systemctl --version
	I0916 23:48:52.358127  522590 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 23:48:52.500662  522590 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:48:52.505640  522590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:48:52.530299  522590 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:48:52.530413  522590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:48:52.562277  522590 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
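	Both bridge configs are renamed with a .mk_disabled suffix rather than deleted, leaving kindnet (selected further down) as the only CNI that will load. A quick hypothetical check of the result, using the paths from the line above:
	
	  ls /etc/cni/net.d/*.mk_disabled
	  # expected per the log: 100-crio-bridge.conf.mk_disabled  87-podman-bridge.conflist.mk_disabled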
	I0916 23:48:52.562302  522590 start.go:495] detecting cgroup driver to use...
	I0916 23:48:52.562333  522590 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:48:52.562405  522590 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:48:52.578904  522590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:48:52.592493  522590 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:48:52.592567  522590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:48:52.607812  522590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:48:52.623718  522590 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:48:52.695401  522590 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:48:52.772869  522590 docker.go:234] disabling docker service ...
	I0916 23:48:52.772931  522590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:48:52.793499  522590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:48:52.806446  522590 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:48:52.880604  522590 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:48:52.994666  522590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:48:53.008181  522590 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:48:53.026581  522590 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0916 23:48:53.026648  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.040463  522590 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0916 23:48:53.040546  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.052415  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.063700  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.074445  522590 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:48:53.085081  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.097098  522590 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.114871  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
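	The net effect of the sed passes above is a CRI-O drop-in that pins the pause image, switches the cgroup manager to systemd with conmon in the pod cgroup, and re-opens low ports to unprivileged pods. Reconstructed from the commands (key values only; the rest of /etc/crio/crio.conf.d/02-crio.conf is not captured here):
	
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]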
	I0916 23:48:53.125827  522590 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:48:53.135170  522590 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:48:53.145546  522590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:48:53.253634  522590 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 23:48:53.356442  522590 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 23:48:53.356540  522590 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 23:48:53.360459  522590 start.go:563] Will wait 60s for crictl version
	I0916 23:48:53.360526  522590 ssh_runner.go:195] Run: which crictl
	I0916 23:48:53.364103  522590 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:48:53.402094  522590 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 23:48:53.402233  522590 ssh_runner.go:195] Run: crio --version
	I0916 23:48:53.441123  522590 ssh_runner.go:195] Run: crio --version
	I0916 23:48:53.481919  522590 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0916 23:48:53.483462  522590 cli_runner.go:164] Run: docker network inspect addons-069011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:48:53.502054  522590 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:48:53.506129  522590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
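	The grep/echo/cp sequence is an idempotent upsert: any stale host.minikube.internal entry is filtered out, the fresh mapping is appended, and the result is copied back under sudo (a plain > redirect would run unprivileged). The same pattern recurs below for control-plane.minikube.internal. A hypothetical check of the outcome:
	
	  grep 'host.minikube.internal' /etc/hosts
	  # 192.168.49.1	host.minikube.internal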
	I0916 23:48:53.518646  522590 kubeadm.go:875] updating cluster {Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 23:48:53.518762  522590 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:48:53.518816  522590 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:48:53.590933  522590 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 23:48:53.590961  522590 crio.go:433] Images already preloaded, skipping extraction
	I0916 23:48:53.591020  522590 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:48:53.627023  522590 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 23:48:53.627057  522590 cache_images.go:85] Images are preloaded, skipping loading
	I0916 23:48:53.627066  522590 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0916 23:48:53.627155  522590 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-069011 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:48:53.627228  522590 ssh_runner.go:195] Run: crio config
	I0916 23:48:53.674869  522590 cni.go:84] Creating CNI manager for ""
	I0916 23:48:53.674893  522590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 23:48:53.674906  522590 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 23:48:53.674926  522590 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-069011 NodeName:addons-069011 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 23:48:53.675093  522590 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-069011"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 23:48:53.675157  522590 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:48:53.685496  522590 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:48:53.685568  522590 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 23:48:53.695890  522590 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0916 23:48:53.715420  522590 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:48:53.738183  522590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
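	With the 2209-byte config staged as kubeadm.yaml.new, it can be sanity-checked before the real init below; a hypothetical invocation using kubeadm's --dry-run mode and the binary path from this log:
	
	  sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run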
	I0916 23:48:53.758975  522590 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 23:48:53.763002  522590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:48:53.775153  522590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:48:53.837066  522590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:48:53.861100  522590 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011 for IP: 192.168.49.2
	I0916 23:48:53.861120  522590 certs.go:194] generating shared ca certs ...
	I0916 23:48:53.861145  522590 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:53.861308  522590 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0916 23:48:54.155814  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt ...
	I0916 23:48:54.155846  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt: {Name:mk009b1713fd08c38e8c6ac054b69276424ded29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.156071  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key ...
	I0916 23:48:54.156093  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key: {Name:mk39b68875de7851b17692da85e287f48166d2fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.156213  522590 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0916 23:48:54.291541  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt ...
	I0916 23:48:54.291579  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt: {Name:mk94baf5fb1a8134bb0c9a9f3d32b751fe0bf777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.291793  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key ...
	I0916 23:48:54.291817  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key: {Name:mk06b3e70f919971eec12f66023f6279f2a9059e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.291928  522590 certs.go:256] generating profile certs ...
	I0916 23:48:54.292014  522590 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.key
	I0916 23:48:54.292060  522590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt with IP's: []
	I0916 23:48:54.529110  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt ...
	I0916 23:48:54.529147  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: {Name:mk9156e00306316f93255eae42ecd81bb5d60b0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.529374  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.key ...
	I0916 23:48:54.529406  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.key: {Name:mk15bd78effcf8815d5571a84284c31db31b997e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.529525  522590 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd
	I0916 23:48:54.529556  522590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0916 23:48:54.601370  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd ...
	I0916 23:48:54.601415  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd: {Name:mkb42f86b810cddd05c27083cd910769800b1942 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.602548  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd ...
	I0916 23:48:54.602578  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd: {Name:mkf41ec91a0589b4d908c830ee946e4604a6886c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.603343  522590 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt
	I0916 23:48:54.603493  522590 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key
	I0916 23:48:54.603577  522590 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key
	I0916 23:48:54.603602  522590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt with IP's: []
	I0916 23:48:54.685718  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt ...
	I0916 23:48:54.685751  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt: {Name:mk4c4f7fbd326f3d00c11caa86441b715a5844e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.686777  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key ...
	I0916 23:48:54.686809  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key: {Name:mkde64e1b9ef5bdc16ad6f2b11b391d65f689b86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.687062  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:48:54.687107  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0916 23:48:54.687130  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:48:54.687161  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0916 23:48:54.687932  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:48:54.717259  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:48:54.744669  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:48:54.771438  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 23:48:54.799454  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 23:48:54.826220  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:48:54.853243  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:48:54.878912  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 23:48:54.905711  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:48:54.935757  522590 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 23:48:54.956698  522590 ssh_runner.go:195] Run: openssl version
	I0916 23:48:54.962817  522590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:48:54.976805  522590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:48:54.980979  522590 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:48:54.981051  522590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:48:54.988637  522590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
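	The b5213941.0 link name follows OpenSSL's subject-hash lookup convention: CAs in /etc/ssl/certs are found by the certificate's subject hash plus a .0 suffix, which is exactly what the openssl x509 -hash run above computes:
	
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # prints b5213941, looked up by OpenSSL as /etc/ssl/certs/b5213941.0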
	I0916 23:48:55.000379  522590 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:48:55.004385  522590 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:48:55.004456  522590 kubeadm.go:392] StartCluster: {Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:48:55.004547  522590 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 23:48:55.004599  522590 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 23:48:55.043443  522590 cri.go:89] found id: ""
	I0916 23:48:55.043525  522590 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 23:48:55.053975  522590 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 23:48:55.064119  522590 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 23:48:55.064186  522590 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 23:48:55.074381  522590 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 23:48:55.074421  522590 kubeadm.go:157] found existing configuration files:
	
	I0916 23:48:55.074469  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 23:48:55.084667  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 23:48:55.084749  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 23:48:55.095859  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 23:48:55.106006  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 23:48:55.106068  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 23:48:55.115485  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 23:48:55.124880  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 23:48:55.124952  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 23:48:55.134292  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 23:48:55.144662  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 23:48:55.144725  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 23:48:55.154111  522590 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 23:48:55.211692  522590 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0916 23:48:55.271378  522590 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
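	The SystemVerification failure is downgraded to a warning by the --ignore-preflight-errors list in the invocation above (per the earlier kubeadm.go:214 line, it is always ignored under the docker driver); the kubelet-service notice is informational either way. A hypothetical way to re-run just this stage in isolation:
	
	  sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" \
	    kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
	    --ignore-preflight-errors=SystemVerification,Service-Kubelet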
	I0916 23:49:04.949743  522590 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0916 23:49:04.949820  522590 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 23:49:04.949928  522590 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 23:49:04.950016  522590 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0916 23:49:04.950100  522590 kubeadm.go:310] OS: Linux
	I0916 23:49:04.950168  522590 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 23:49:04.950250  522590 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 23:49:04.950311  522590 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 23:49:04.950355  522590 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 23:49:04.950436  522590 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 23:49:04.950511  522590 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 23:49:04.950590  522590 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 23:49:04.950659  522590 kubeadm.go:310] CGROUPS_IO: enabled
	I0916 23:49:04.950779  522590 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 23:49:04.950896  522590 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 23:49:04.950988  522590 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 23:49:04.951039  522590 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 23:49:04.953148  522590 out.go:252]   - Generating certificates and keys ...
	I0916 23:49:04.953253  522590 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 23:49:04.953350  522590 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 23:49:04.953473  522590 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 23:49:04.953544  522590 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 23:49:04.953598  522590 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 23:49:04.953656  522590 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 23:49:04.953723  522590 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 23:49:04.953871  522590 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-069011 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:49:04.953944  522590 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 23:49:04.954104  522590 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-069011 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:49:04.954204  522590 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 23:49:04.954308  522590 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 23:49:04.954373  522590 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 23:49:04.954472  522590 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 23:49:04.954529  522590 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 23:49:04.954641  522590 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 23:49:04.954719  522590 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 23:49:04.954827  522590 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 23:49:04.954889  522590 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 23:49:04.954961  522590 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 23:49:04.955029  522590 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 23:49:04.956667  522590 out.go:252]   - Booting up control plane ...
	I0916 23:49:04.956807  522590 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 23:49:04.956925  522590 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 23:49:04.956985  522590 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 23:49:04.957219  522590 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 23:49:04.957368  522590 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0916 23:49:04.957516  522590 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0916 23:49:04.957633  522590 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 23:49:04.957703  522590 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 23:49:04.957908  522590 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 23:49:04.958044  522590 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 23:49:04.958151  522590 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.203651ms
	I0916 23:49:04.958278  522590 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0916 23:49:04.958374  522590 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0916 23:49:04.958531  522590 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0916 23:49:04.958637  522590 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0916 23:49:04.958758  522590 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.870805967s
	I0916 23:49:04.958876  522590 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.059203573s
	I0916 23:49:04.958980  522590 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.002212231s
	I0916 23:49:04.959143  522590 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 23:49:04.959322  522590 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 23:49:04.959464  522590 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 23:49:04.959729  522590 kubeadm.go:310] [mark-control-plane] Marking the node addons-069011 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 23:49:04.959828  522590 kubeadm.go:310] [bootstrap-token] Using token: hth27u.vwd374r3m591cy8w
	I0916 23:49:04.961508  522590 out.go:252]   - Configuring RBAC rules ...
	I0916 23:49:04.961663  522590 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 23:49:04.961761  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 23:49:04.961918  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 23:49:04.962103  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 23:49:04.962249  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 23:49:04.962324  522590 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 23:49:04.962449  522590 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 23:49:04.962510  522590 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 23:49:04.962584  522590 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 23:49:04.962595  522590 kubeadm.go:310] 
	I0916 23:49:04.962677  522590 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 23:49:04.962687  522590 kubeadm.go:310] 
	I0916 23:49:04.962800  522590 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 23:49:04.962816  522590 kubeadm.go:310] 
	I0916 23:49:04.962858  522590 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 23:49:04.962957  522590 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 23:49:04.963031  522590 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 23:49:04.963041  522590 kubeadm.go:310] 
	I0916 23:49:04.963139  522590 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 23:49:04.963150  522590 kubeadm.go:310] 
	I0916 23:49:04.963217  522590 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 23:49:04.963226  522590 kubeadm.go:310] 
	I0916 23:49:04.963305  522590 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 23:49:04.963432  522590 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 23:49:04.963527  522590 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 23:49:04.963541  522590 kubeadm.go:310] 
	I0916 23:49:04.963668  522590 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 23:49:04.963778  522590 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 23:49:04.963792  522590 kubeadm.go:310] 
	I0916 23:49:04.963908  522590 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hth27u.vwd374r3m591cy8w \
	I0916 23:49:04.964060  522590 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 \
	I0916 23:49:04.964108  522590 kubeadm.go:310] 	--control-plane 
	I0916 23:49:04.964118  522590 kubeadm.go:310] 
	I0916 23:49:04.964224  522590 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 23:49:04.964234  522590 kubeadm.go:310] 
	I0916 23:49:04.964354  522590 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hth27u.vwd374r3m591cy8w \
	I0916 23:49:04.964531  522590 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 
	I0916 23:49:04.964546  522590 cni.go:84] Creating CNI manager for ""
	I0916 23:49:04.964565  522590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 23:49:04.966440  522590 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0916 23:49:04.968135  522590 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 23:49:04.972876  522590 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0916 23:49:04.972901  522590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 23:49:04.992864  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 23:49:05.238639  522590 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 23:49:05.238825  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:05.238851  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-069011 minikube.k8s.io/updated_at=2025_09_16T23_49_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=addons-069011 minikube.k8s.io/primary=true
	I0916 23:49:05.248222  522590 ops.go:34] apiserver oom_adj: -16
	I0916 23:49:05.324340  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:05.825316  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:06.324537  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:06.824724  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:07.325050  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:07.824729  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:08.325083  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:08.824525  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:09.324551  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:09.825331  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:09.895926  522590 kubeadm.go:1105] duration metric: took 4.65716259s to wait for elevateKubeSystemPrivileges
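	The repeated "get sa default" calls above are a readiness poll: elevateKubeSystemPrivileges retries until the default ServiceAccount exists before treating the minikube-rbac binding as effective. Equivalent hypothetical one-shot checks against the same kubeconfig:
	
	  sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default
	  sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac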
	I0916 23:49:09.895964  522590 kubeadm.go:394] duration metric: took 14.891511977s to StartCluster
	I0916 23:49:09.895989  522590 settings.go:142] acquiring lock: {Name:mk3b4e5824fb8718eece00dc70a9d05f0af2a028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:49:09.896108  522590 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0916 23:49:09.896612  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:49:09.896807  522590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 23:49:09.896820  522590 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 23:49:09.896883  522590 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 23:49:09.897046  522590 config.go:182] Loaded profile config "addons-069011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:49:09.897061  522590 addons.go:69] Setting volcano=true in profile "addons-069011"
	I0916 23:49:09.897068  522590 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-069011"
	I0916 23:49:09.897082  522590 addons.go:238] Setting addon volcano=true in "addons-069011"
	I0916 23:49:09.897052  522590 addons.go:69] Setting yakd=true in profile "addons-069011"
	I0916 23:49:09.897090  522590 addons.go:69] Setting registry-creds=true in profile "addons-069011"
	I0916 23:49:09.897102  522590 addons.go:238] Setting addon yakd=true in "addons-069011"
	I0916 23:49:09.897112  522590 addons.go:238] Setting addon registry-creds=true in "addons-069011"
	I0916 23:49:09.897122  522590 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-069011"
	I0916 23:49:09.897128  522590 addons.go:69] Setting storage-provisioner=true in profile "addons-069011"
	I0916 23:49:09.897146  522590 addons.go:69] Setting volumesnapshots=true in profile "addons-069011"
	I0916 23:49:09.897161  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897169  522590 addons.go:69] Setting metrics-server=true in profile "addons-069011"
	I0916 23:49:09.897176  522590 addons.go:69] Setting cloud-spanner=true in profile "addons-069011"
	I0916 23:49:09.897178  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897047  522590 addons.go:69] Setting inspektor-gadget=true in profile "addons-069011"
	I0916 23:49:09.897165  522590 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-069011"
	I0916 23:49:09.897206  522590 addons.go:238] Setting addon cloud-spanner=true in "addons-069011"
	I0916 23:49:09.897216  522590 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-069011"
	I0916 23:49:09.897232  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897233  522590 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-069011"
	I0916 23:49:09.897264  522590 addons.go:238] Setting addon inspektor-gadget=true in "addons-069011"
	I0916 23:49:09.897181  522590 addons.go:238] Setting addon metrics-server=true in "addons-069011"
	I0916 23:49:09.897423  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897445  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897164  522590 addons.go:238] Setting addon volumesnapshots=true in "addons-069011"
	I0916 23:49:09.897586  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897092  522590 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-069011"
	I0916 23:49:09.897619  522590 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-069011"
	I0916 23:49:09.897820  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897823  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897828  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897883  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897925  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897931  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.898010  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897153  522590 addons.go:238] Setting addon storage-provisioner=true in "addons-069011"
	I0916 23:49:09.898348  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897270  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897123  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.898989  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.899031  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897162  522590 addons.go:69] Setting registry=true in profile "addons-069011"
	I0916 23:49:09.899114  522590 addons.go:238] Setting addon registry=true in "addons-069011"
	I0916 23:49:09.899147  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897135  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897171  522590 addons.go:69] Setting default-storageclass=true in profile "addons-069011"
	I0916 23:49:09.899508  522590 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-069011"
	I0916 23:49:09.897278  522590 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-069011"
	I0916 23:49:09.899697  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897286  522590 addons.go:69] Setting ingress=true in profile "addons-069011"
	I0916 23:49:09.899882  522590 addons.go:238] Setting addon ingress=true in "addons-069011"
	I0916 23:49:09.899918  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897295  522590 addons.go:69] Setting gcp-auth=true in profile "addons-069011"
	I0916 23:49:09.899976  522590 mustload.go:65] Loading cluster: addons-069011
	I0916 23:49:09.897305  522590 addons.go:69] Setting ingress-dns=true in profile "addons-069011"
	I0916 23:49:09.900142  522590 addons.go:238] Setting addon ingress-dns=true in "addons-069011"
	I0916 23:49:09.900176  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.900346  522590 out.go:179] * Verifying Kubernetes components...
	I0916 23:49:09.902141  522590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:49:09.906029  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906489  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906586  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906921  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.907068  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.909270  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.909876  522590 config.go:182] Loaded profile config "addons-069011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:49:09.910613  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906032  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.966036  522590 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-069011"
	I0916 23:49:09.966110  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.966784  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	W0916 23:49:09.981981  522590 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0916 23:49:09.986930  522590 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0916 23:49:09.989771  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 23:49:09.989801  522590 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 23:49:09.989878  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:09.990151  522590 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0916 23:49:09.991871  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 23:49:09.992484  522590 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0916 23:49:09.993934  522590 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 23:49:09.993954  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 23:49:09.994025  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:09.994418  522590 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0916 23:49:09.994431  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0916 23:49:09.994485  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:09.997452  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 23:49:09.997452  522590 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0916 23:49:10.001152  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 23:49:10.001192  522590 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0916 23:49:10.001229  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0916 23:49:10.001311  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.003359  522590 addons.go:238] Setting addon default-storageclass=true in "addons-069011"
	I0916 23:49:10.003429  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:10.003879  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:10.004609  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 23:49:10.006166  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 23:49:10.007322  522590 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0916 23:49:10.008643  522590 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 23:49:10.008663  522590 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0916 23:49:10.008684  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 23:49:10.008820  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 23:49:10.008829  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.010190  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 23:49:10.010220  522590 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 23:49:10.010294  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.012486  522590 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 23:49:10.012564  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 23:49:10.014826  522590 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:49:10.014910  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 23:49:10.015167  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.016771  522590 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0916 23:49:10.018372  522590 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 23:49:10.018418  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0916 23:49:10.018493  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 23:49:10.018494  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.019739  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 23:49:10.019764  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 23:49:10.019840  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.023104  522590 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0916 23:49:10.023240  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0916 23:49:10.024340  522590 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 23:49:10.024365  522590 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0916 23:49:10.024441  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.025784  522590 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0916 23:49:10.025900  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:49:10.027422  522590 out.go:179]   - Using image docker.io/registry:3.0.0
	I0916 23:49:10.029503  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:10.032360  522590 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 23:49:10.032382  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 23:49:10.032451  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.032643  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:49:10.037094  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.038113  522590 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 23:49:10.038152  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 23:49:10.038221  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.058927  522590 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 23:49:10.058950  522590 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 23:49:10.059009  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.063705  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 23:49:10.066747  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 23:49:10.066781  522590 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 23:49:10.066937  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.067231  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.069660  522590 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 23:49:10.072852  522590 out.go:179]   - Using image docker.io/busybox:stable
	I0916 23:49:10.077706  522590 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 23:49:10.077738  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 23:49:10.077812  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.081171  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.099594  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.099601  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.101679  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.103303  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.109277  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.113014  522590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 23:49:10.114406  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.114692  522590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:49:10.116962  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.132677  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.135654  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.137795  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.144377  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.149192  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.245816  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 23:49:10.245838  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 23:49:10.253803  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0916 23:49:10.256108  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 23:49:10.265944  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 23:49:10.288794  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 23:49:10.288827  522590 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 23:49:10.291276  522590 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:10.291301  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0916 23:49:10.298027  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:49:10.301761  522590 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 23:49:10.301815  522590 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 23:49:10.303881  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 23:49:10.303906  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 23:49:10.307619  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0916 23:49:10.321011  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 23:49:10.321513  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 23:49:10.321533  522590 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 23:49:10.335228  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 23:49:10.342628  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 23:49:10.353105  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 23:49:10.360830  522590 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 23:49:10.360864  522590 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 23:49:10.366097  522590 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 23:49:10.366124  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 23:49:10.368966  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 23:49:10.368997  522590 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 23:49:10.374870  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 23:49:10.374897  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 23:49:10.383228  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:10.419473  522590 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 23:49:10.419505  522590 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 23:49:10.420148  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 23:49:10.420173  522590 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 23:49:10.431466  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 23:49:10.431495  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 23:49:10.431508  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 23:49:10.447520  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 23:49:10.491601  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 23:49:10.491635  522590 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 23:49:10.495666  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 23:49:10.495699  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 23:49:10.522266  522590 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 23:49:10.522304  522590 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 23:49:10.608119  522590 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
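	(The confirmation above is produced by the sed pipeline logged at 23:49:10.113: it rewrites the coredns ConfigMap in kube-system so the Corefile gains a hosts block ahead of the forward plugin, mapping host.minikube.internal to the host gateway IP. The inserted fragment, taken verbatim from that command, is:

	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	)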
	I0916 23:49:10.610081  522590 node_ready.go:35] waiting up to 6m0s for node "addons-069011" to be "Ready" ...
	I0916 23:49:10.613978  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 23:49:10.614095  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 23:49:10.619888  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 23:49:10.619918  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 23:49:10.636272  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 23:49:10.636303  522590 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 23:49:10.689230  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 23:49:10.705272  522590 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 23:49:10.705297  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 23:49:10.708368  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 23:49:10.708557  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 23:49:10.788275  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 23:49:10.788306  522590 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 23:49:10.806501  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 23:49:10.869607  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 23:49:10.869632  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 23:49:10.937889  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 23:49:10.937914  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 23:49:11.002071  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 23:49:11.002102  522590 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 23:49:11.047895  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 23:49:11.130142  522590 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-069011" context rescaled to 1 replicas
	I0916 23:49:11.643350  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.290178117s)
	I0916 23:49:11.643439  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.30078278s)
	I0916 23:49:11.643452  522590 addons.go:479] Verifying addon ingress=true in "addons-069011"
	I0916 23:49:11.643582  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.212051777s)
	I0916 23:49:11.643613  522590 addons.go:479] Verifying addon registry=true in "addons-069011"
	I0916 23:49:11.643522  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.260251451s)
	I0916 23:49:11.643722  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.196160875s)
	W0916 23:49:11.643735  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:11.643740  522590 addons.go:479] Verifying addon metrics-server=true in "addons-069011"
	I0916 23:49:11.643761  522590 retry.go:31] will retry after 298.602868ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
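	(The root cause of this retry loop is visible earlier in the log: the scp step at 23:49:10.024 copied ig-crd.yaml at only 14 bytes, so the manifest kubectl is asked to validate is effectively empty, and client-side validation fails before anything reaches the API server — hence "apiVersion not set, kind not set". A minimal sketch of that same pre-apply check in Go follows; it is illustrative only, not minikube code, and the struct and messages are assumptions:

	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	// typeMeta mirrors the two fields kubectl's client-side validation
	// complains about in the log above.
	type typeMeta struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var tm typeMeta
		if err := yaml.Unmarshal(data, &tm); err != nil {
			fmt.Fprintln(os.Stderr, "not valid YAML:", err)
			os.Exit(1)
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			// Same condition kubectl reports: apiVersion not set, kind not set.
			fmt.Println("manifest missing apiVersion/kind; kubectl apply will fail validation")
		}
	}
	)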
	I0916 23:49:11.646501  522590 out.go:179] * Verifying registry addon...
	I0916 23:49:11.646501  522590 out.go:179] * Verifying ingress addon...
	I0916 23:49:11.646504  522590 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-069011 service yakd-dashboard -n yakd-dashboard
	
	I0916 23:49:11.652191  522590 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 23:49:11.652206  522590 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 23:49:11.655147  522590 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 23:49:11.655173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:11.655271  522590 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 23:49:11.655299  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:11.943533  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:12.143203  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.336408881s)
	W0916 23:49:12.143280  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 23:49:12.143297  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.095362374s)
	I0916 23:49:12.143318  522590 retry.go:31] will retry after 271.042655ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
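	(The "ensure CRDs are installed first" failure is an ordering race rather than a bad manifest: csi-hostpath-snapshotclass.yaml creates a VolumeSnapshotClass in the same kubectl invocation that creates the CRD defining that kind, and the new kind is not yet discoverable when the object is validated. The forced re-apply logged at 23:49:12.415 completes without this error once the CRDs are established. A two-phase apply avoids the race entirely; the Go sketch below is illustrative only — the helper is hypothetical, the fixed sleep stands in for polling the CRDs' Established condition, and the file paths are taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyManifests shells out to kubectl for each file, in order.
	// Hypothetical helper, not minikube code.
	func applyManifests(files ...string) error {
		for _, f := range files {
			out, err := exec.Command("kubectl", "apply", "-f", f).CombinedOutput()
			if err != nil {
				return fmt.Errorf("apply %s: %v\n%s", f, err, out)
			}
		}
		return nil
	}

	func main() {
		// Phase 1: the CRDs that define the snapshot kinds.
		crds := []string{
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml",
		}
		if err := applyManifests(crds...); err != nil {
			panic(err)
		}
		// Give the API server a moment to register the new kinds before using them.
		time.Sleep(2 * time.Second)

		// Phase 2: objects whose kinds were created above.
		if err := applyManifests("/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
			panic(err)
		}
	}
	)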
	I0916 23:49:12.143322  522590 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-069011"
	I0916 23:49:12.145833  522590 out.go:179] * Verifying csi-hostpath-driver addon...
	I0916 23:49:12.148236  522590 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 23:49:12.153014  522590 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 23:49:12.153041  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:12.157053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:12.157321  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:12.415287  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0916 23:49:12.575627  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:12.575662  522590 retry.go:31] will retry after 298.950278ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:12.614105  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:12.652906  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:12.655120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:12.655721  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:12.875699  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:13.152262  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:13.155946  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:13.156155  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:13.653200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:13.655268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:13.655558  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:14.152741  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:14.154674  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:14.154869  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:14.651414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:14.654802  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:14.654981  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:14.929904  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.51454475s)
	I0916 23:49:14.929925  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.05417803s)
	W0916 23:49:14.929968  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:14.929993  522590 retry.go:31] will retry after 724.402782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:15.113335  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:15.152058  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:15.155353  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:15.155409  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:15.651139  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:15.655103  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:15.655174  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:15.655439  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:16.152053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:16.155268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:16.155481  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:16.233482  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:16.233517  522590 retry.go:31] will retry after 528.645422ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:16.652337  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:16.654976  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:16.655052  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:16.763126  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:17.152861  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:17.155200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:17.155374  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:17.346237  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:17.346292  522590 retry.go:31] will retry after 1.241721728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:17.613291  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:17.637138  522590 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 23:49:17.637240  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:17.651912  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:17.655594  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:17.655874  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:17.659459  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:17.770859  522590 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 23:49:17.790444  522590 addons.go:238] Setting addon gcp-auth=true in "addons-069011"
	I0916 23:49:17.790517  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:17.790880  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:17.810255  522590 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 23:49:17.810334  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:17.829504  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:17.924366  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:49:17.925772  522590 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0916 23:49:17.926989  522590 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 23:49:17.927016  522590 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 23:49:17.947928  522590 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 23:49:17.947963  522590 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 23:49:17.968887  522590 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 23:49:17.968910  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 23:49:17.988471  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 23:49:18.151889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:18.155501  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:18.155799  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:18.360333  522590 addons.go:479] Verifying addon gcp-auth=true in "addons-069011"
	I0916 23:49:18.361695  522590 out.go:179] * Verifying gcp-auth addon...
	I0916 23:49:18.364169  522590 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 23:49:18.367024  522590 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 23:49:18.367044  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
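The repeating kapi.go:96 lines are a poll loop: list the pods matching a label selector and log the current phase until all of them are Running. A hedged sketch with client-go (the kubeconfig path and 500ms interval are assumptions for illustration, not minikube's exact values):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls pods matching selector in ns until all are Running,
// logging the current phase on each pass -- the loop behind the repeating
// kapi.go:96 "waiting for pod" lines.
func waitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.Background(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // assumed interval; the log shows sub-second polls
	}
	return fmt.Errorf("pods %q in namespace %q not Running within %v", selector, ns, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForLabel(kubernetes.NewForConfigOrDie(cfg), "gcp-auth",
		"kubernetes.io/minikube-addons=gcp-auth", 6*time.Minute))
}
```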
	I0916 23:49:18.588324  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:18.652355  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:18.654775  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:18.655329  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:18.867741  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:19.151755  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:19.154903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:19.154930  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:19.161345  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:19.161383  522590 retry.go:31] will retry after 2.165570319s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:19.367774  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:19.614026  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:19.652152  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:19.655765  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:19.655827  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:19.867758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:20.151387  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:20.154666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:20.154897  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:20.368600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:20.651411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:20.655000  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:20.655011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:20.868027  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:21.151730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:21.155244  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:21.155464  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:21.327698  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:21.367411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:21.650905  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:21.655659  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:21.655769  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:21.867968  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:21.902069  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:21.902100  522590 retry.go:31] will retry after 1.920767743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:22.113269  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:22.152312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:22.154840  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:22.154952  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:22.368638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:22.651563  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:22.654897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:22.655020  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:22.868412  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:23.151599  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:23.155033  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:23.155245  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:23.367616  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:23.651422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:23.654714  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:23.654854  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:23.823078  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:23.867734  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:24.113772  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:24.152012  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:24.155306  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:24.155536  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:24.367843  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:24.396574  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:24.396608  522590 retry.go:31] will retry after 5.249600328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
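The retry delays logged so far (1.24s, 2.17s, 1.92s, 5.25s, ...) grow roughly exponentially with random jitter, which is why a later wait is occasionally shorter than an earlier one. A minimal sketch of that pattern using only the standard library (assumed, not minikube's actual retry.go):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo re-runs fn with a jittered, roughly doubling delay until it
// succeeds or the overall deadline passes.
func retryExpo(fn func() error, base, maxTotal time.Duration) error {
	deadline := time.Now().Add(maxTotal)
	delay := base
	for {
		err := fn()
		if err == nil {
			return nil
		}
		// Jitter in [0.5, 1.5) of the nominal delay, so successive waits
		// wander around a doubling trend, as in the log above.
		wait := time.Duration(float64(delay) * (0.5 + rand.Float64()))
		if time.Now().Add(wait).After(deadline) {
			return fmt.Errorf("gave up: last error: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
}

func main() {
	n := 0
	err := retryExpo(func() error {
		n++
		if n < 4 {
			return errors.New("apply failed")
		}
		return nil
	}, time.Second, time.Minute)
	fmt.Println(err, "after", n, "attempts")
}
```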
	I0916 23:49:24.651892  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:24.655386  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:24.655528  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:24.868048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:25.152228  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:25.154971  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:25.155056  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:25.368598  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:25.651661  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:25.655231  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:25.655269  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:25.867507  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:26.151287  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:26.155745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:26.155923  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:26.368083  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:26.612894  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:26.652086  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:26.655386  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:26.655500  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:26.867894  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:27.151727  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:27.155040  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:27.155077  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:27.368077  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:27.652080  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:27.655544  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:27.655685  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:27.868071  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:28.151972  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:28.155039  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:28.155194  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:28.367271  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:28.613247  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:28.652605  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:28.654553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:28.654734  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:28.868444  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:29.151120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:29.155325  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:29.155404  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:29.367903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:29.646635  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:29.651947  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:29.655369  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:29.655591  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:29.868090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:30.151994  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:30.155445  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:30.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:30.222879  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:30.222909  522590 retry.go:31] will retry after 6.679975361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:30.368039  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:30.651921  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:30.655141  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:30.655354  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:30.867036  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:31.112894  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:31.151818  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:31.155258  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:31.155291  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:31.367578  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:31.651196  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:31.655723  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:31.655764  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:31.867818  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:32.152173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:32.155965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:32.156115  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:32.367078  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:32.652733  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:32.655287  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:32.655347  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:32.867604  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:33.113866  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:33.151850  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:33.155462  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:33.155490  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:33.367548  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:33.651173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:33.655487  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:33.655550  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:33.867796  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:34.151692  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:34.154752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:34.154822  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:34.367980  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:34.652127  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:34.655730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:34.655791  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:34.868271  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:35.151839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:35.155765  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:35.155925  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:35.368376  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:35.613366  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:35.651791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:35.655929  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:35.656002  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:35.868276  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:36.152007  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:36.155246  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:36.155379  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:36.367593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:36.652140  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:36.655627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:36.655826  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:36.867579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:36.903759  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:37.152322  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:37.155245  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:37.155410  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:37.367621  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:37.484516  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:37.484552  522590 retry.go:31] will retry after 4.853736845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:37.613755  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:37.651588  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:37.654987  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:37.655126  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:37.867377  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:38.151407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:38.154847  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:38.155074  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:38.368215  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:38.651724  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:38.655025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:38.655174  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:38.867641  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:39.151291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:39.155533  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:39.155660  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:39.368023  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:39.613957  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:39.652056  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:39.655324  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:39.655427  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:39.867688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:40.151889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:40.155213  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:40.155515  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:40.367629  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:40.652268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:40.655504  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:40.655716  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:40.867786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:41.151908  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:41.155026  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:41.155219  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:41.367009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:41.652274  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:41.654845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:41.654993  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:41.868497  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:42.113784  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:42.152011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:42.156178  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:42.156253  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:42.339312  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:42.368085  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:42.653863  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:42.656534  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:42.656609  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:42.867016  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:42.931965  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:42.932013  522590 retry.go:31] will retry after 9.201032876s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:43.151738  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:43.155452  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:43.157165  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:43.367931  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:43.651921  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:43.655792  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:43.655791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:43.868283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:44.151192  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:44.155952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:44.156077  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:44.368187  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:44.612897  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:44.651871  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:44.655165  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:44.655374  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:44.867416  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:45.152200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:45.155365  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:45.155527  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:45.367088  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:45.652905  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:45.655224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:45.655382  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:45.867470  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:46.152562  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:46.155553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:46.155698  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:46.367899  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:46.613967  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:46.652183  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:46.655613  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:46.655685  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:46.867721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:47.151749  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:47.155062  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:47.155242  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:47.367292  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:47.652156  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:47.655812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:47.656147  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:47.867423  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:48.152152  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:48.155526  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:48.155678  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:48.367871  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:48.651966  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:48.655104  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:48.655456  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:48.867380  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:49.113864  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:49.151422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:49.154601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:49.154659  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:49.368059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:49.651895  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:49.655081  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:49.655227  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:49.867193  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:50.151407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:50.154433  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:50.154532  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:50.367752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:50.614048  522590 node_ready.go:49] node "addons-069011" is "Ready"
	I0916 23:49:50.614124  522590 node_ready.go:38] duration metric: took 40.004018622s for node "addons-069011" to be "Ready" ...
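The 40s wait that just concluded is the poll behind the earlier `has "Ready":"False" status (will retry)` warnings: fetch the Node object and check its NodeReady condition roughly every 2s. A sketch with client-go (assumed code, not minikube's node_ready.go; the kubeconfig path is illustrative):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node's NodeReady condition is True --
// the check behind the 'has "Ready":"False" status (will retry)' warnings.
func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		if ok, err := nodeReady(cs, "addons-069011"); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second) // the warnings above recur at ~2s intervals
	}
}
```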
	I0916 23:49:50.614142  522590 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:49:50.614260  522590 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:49:50.634002  522590 api_server.go:72] duration metric: took 40.737149121s to wait for apiserver process to appear ...
	I0916 23:49:50.634037  522590 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:49:50.634066  522590 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:49:50.639530  522590 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:49:50.640709  522590 api_server.go:141] control plane version: v1.34.0
	I0916 23:49:50.640743  522590 api_server.go:131] duration metric: took 6.69752ms to wait for apiserver health ...
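The healthz probe logged here is a plain HTTPS GET that treats status 200 with body `ok` as healthy. A self-contained sketch (assumed, not minikube's api_server.go; a real client would trust the cluster CA from the kubeconfig rather than skipping verification):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy performs the probe logged above: GET /healthz against the
// apiserver and report healthy only on HTTP 200 with body "ok".
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification is for this sketch only; load the cluster CA
		// from the kubeconfig in real code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	healthy, err := apiserverHealthy("https://192.168.49.2:8443/healthz")
	fmt.Println(healthy, err)
}
```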
	I0916 23:49:50.640754  522590 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:49:50.645035  522590 system_pods.go:59] 20 kube-system pods found
	I0916 23:49:50.645109  522590 system_pods.go:61] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:50.645119  522590 system_pods.go:61] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:50.645126  522590 system_pods.go:61] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending
	I0916 23:49:50.645131  522590 system_pods.go:61] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending
	I0916 23:49:50.645134  522590 system_pods.go:61] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending
	I0916 23:49:50.645138  522590 system_pods.go:61] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:50.645141  522590 system_pods.go:61] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:50.645146  522590 system_pods.go:61] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:50.645150  522590 system_pods.go:61] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:50.645156  522590 system_pods.go:61] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:50.645165  522590 system_pods.go:61] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:50.645171  522590 system_pods.go:61] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:50.645182  522590 system_pods.go:61] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:50.645192  522590 system_pods.go:61] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:50.645206  522590 system_pods.go:61] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:50.645211  522590 system_pods.go:61] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending
	I0916 23:49:50.645217  522590 system_pods.go:61] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:50.645222  522590 system_pods.go:61] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.645231  522590 system_pods.go:61] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.645238  522590 system_pods.go:61] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:50.645253  522590 system_pods.go:74] duration metric: took 4.491675ms to wait for pod list to return data ...
	I0916 23:49:50.645267  522590 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:49:50.649832  522590 default_sa.go:45] found service account: "default"
	I0916 23:49:50.649863  522590 default_sa.go:55] duration metric: took 4.587184ms for default service account to be created ...
	I0916 23:49:50.649876  522590 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:49:50.651240  522590 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 23:49:50.651263  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:50.653416  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:50.653453  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:50.653463  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:50.653471  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending
	I0916 23:49:50.653478  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending
	I0916 23:49:50.653507  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending
	I0916 23:49:50.653517  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:50.653523  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:50.653531  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:50.653541  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:50.653553  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:50.653564  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:50.653570  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:50.653577  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:50.653586  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:50.653604  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:50.653610  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending
	I0916 23:49:50.653621  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:50.653630  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.653641  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.653649  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:50.653671  522590 retry.go:31] will retry after 286.454663ms: missing components: kube-dns
	I0916 23:49:50.654669  522590 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 23:49:50.654689  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:50.655263  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:50.867812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:50.970963  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:50.971008  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:50.971021  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:50.971032  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 23:49:50.971040  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 23:49:50.971049  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 23:49:50.971060  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:50.971067  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:50.971075  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:50.971081  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:50.971093  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:50.971098  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:50.971107  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:50.971115  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:50.971127  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:50.971139  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:50.971149  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0916 23:49:50.971487  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:50.971519  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.971529  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.971537  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:50.971560  522590 retry.go:31] will retry after 250.710433ms: missing components: kube-dns
	I0916 23:49:51.152661  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:51.154830  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:51.154922  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:51.227146  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:51.227184  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:51.227191  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:51.227200  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 23:49:51.227206  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 23:49:51.227213  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 23:49:51.227219  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:51.227223  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:51.227226  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:51.227230  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:51.227235  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:51.227241  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:51.227244  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:51.227250  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:51.227256  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:51.227261  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:51.227265  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0916 23:49:51.227272  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:51.227277  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.227286  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.227292  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:51.227310  522590 retry.go:31] will retry after 293.334556ms: missing components: kube-dns
	I0916 23:49:51.368304  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:51.526481  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:51.526535  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:51.526545  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Running
	I0916 23:49:51.526559  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 23:49:51.526572  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 23:49:51.526582  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 23:49:51.526589  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:51.526595  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:51.526601  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:51.526608  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:51.526618  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:51.526623  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:51.526629  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:51.526635  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:51.526645  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:51.526690  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:51.526699  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0916 23:49:51.526714  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:51.526722  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.526731  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.526737  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Running
	I0916 23:49:51.526755  522590 system_pods.go:126] duration metric: took 876.872082ms to wait for k8s-apps to be running ...
	I0916 23:49:51.526767  522590 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:49:51.526834  522590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:49:51.543571  522590 system_svc.go:56] duration metric: took 16.790922ms WaitForService to wait for kubelet
	I0916 23:49:51.543604  522590 kubeadm.go:578] duration metric: took 41.646760707s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:49:51.543633  522590 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:49:51.546804  522590 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:49:51.546832  522590 node_conditions.go:123] node cpu capacity is 8
	I0916 23:49:51.546851  522590 node_conditions.go:105] duration metric: took 3.210939ms to run NodePressure ...
	I0916 23:49:51.546866  522590 start.go:241] waiting for startup goroutines ...
	I0916 23:49:51.653201  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:51.655460  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:51.655502  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:51.867905  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:52.133215  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:52.152421  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:52.155241  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:52.155318  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:52.367901  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:52.651612  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:52.655810  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:52.655874  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:52.780604  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:52.780644  522590 retry.go:31] will retry after 11.236841486s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
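
The apply failure above is kubectl's client-side manifest validation: every YAML document must set apiVersion and kind, and /etc/kubernetes/addons/ig-crd.yaml evidently contains a document that sets neither, so the apply exits with status 1 even though the other gadget resources are unchanged. As a minimal sketch (the actual contents of ig-crd.yaml are not reproduced in this report, so the group and names below are placeholders), a CustomResourceDefinition that passes this check declares both fields up front:

	# Hypothetical skeleton only; the real ig-crd.yaml is not shown in this log.
	apiVersion: apiextensions.k8s.io/v1     # missing -> "apiVersion not set"
	kind: CustomResourceDefinition          # missing -> "kind not set"
	metadata:
	  name: traces.gadget.example.io        # placeholder name
	spec:
	  group: gadget.example.io              # placeholder group
	  scope: Namespaced
	  names:
	    kind: Trace
	    plural: traces
	    singular: trace
	  versions:
	    - name: v1alpha1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object
	          x-kubernetes-preserve-unknown-fields: true

The error text also names the escape hatch (--validate=false), but the retry loop below keeps validation on and re-runs the identical command, which is why the same stdout/stderr reappears verbatim on the next attempt.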
	I0916 23:49:52.867960  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:53.152499  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:53.155229  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:53.155690  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:53.369120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:53.653294  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:53.655366  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:53.655499  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:53.867612  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:54.152263  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:54.154786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:54.154825  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:54.368535  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:54.651809  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:54.655532  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:54.655654  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:54.868318  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:55.152216  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:55.154997  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:55.155198  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:55.368885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:55.652607  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:55.654882  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:55.654882  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:55.868072  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:56.153735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:56.155961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:56.156369  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:56.367288  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:56.651552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:56.654554  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:56.654654  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:56.867827  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:57.152232  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:57.154799  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:57.154814  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:57.368344  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:57.651690  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:57.655166  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:57.655327  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:57.867912  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:58.152149  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:58.155593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:58.155720  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:58.367868  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:58.652249  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:58.654626  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:58.654817  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:58.867989  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:59.152281  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:59.154848  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:59.154899  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:59.368414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:59.651849  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:59.655048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:59.655193  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:59.866961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:00.152429  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:00.154913  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:00.154932  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:00.367821  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:00.652008  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:00.655477  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:00.655518  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:00.867460  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:01.152318  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:01.155248  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:01.155323  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:01.367552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:01.651746  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:01.655519  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:01.655601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:01.867766  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:02.152212  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:02.154600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:02.154831  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:02.367336  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:02.651757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:02.655315  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:02.655331  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:02.867665  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:03.152281  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:03.154749  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:03.154818  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:03.368215  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:03.651319  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:03.655739  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:03.655966  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:03.868159  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:04.018435  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:50:04.151970  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:04.155986  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:04.156204  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:04.367594  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:50:04.598781  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:50:04.598815  522590 retry.go:31] will retry after 23.829016694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:50:04.652029  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:04.655382  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:04.655518  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:04.867585  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:05.151943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:05.155427  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:05.155490  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:05.367838  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:05.652819  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:05.654813  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:05.654893  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:05.868265  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:06.151902  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:06.155241  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:06.155278  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:06.367335  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:06.651933  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:06.655376  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:06.655409  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:06.867544  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:07.151927  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:07.155463  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:07.155566  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:07.367946  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:07.652554  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:07.655150  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:07.655250  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:07.867104  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:08.151576  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:08.154867  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:08.154932  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:08.367820  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:08.652108  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:08.655667  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:08.655674  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:08.867488  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:09.151318  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:09.155660  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:09.155771  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:09.368018  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:09.652352  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:09.654759  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:09.654924  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:09.867979  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:10.152292  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:10.154712  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:10.154744  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:10.367888  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:10.652342  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:10.654855  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:10.655052  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:10.868023  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:11.152284  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:11.154741  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:11.154823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:11.368224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:11.651602  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:11.654730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:11.655430  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:11.867911  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:12.152453  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:12.155032  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:12.155233  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:12.367898  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:12.652236  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:12.654831  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:12.654839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:12.868375  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:13.151282  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:13.155678  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:13.155786  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:13.368346  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:13.652132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:13.655641  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:13.655658  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:13.867735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:14.152048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:14.155624  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:14.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:14.367645  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:14.651952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:14.655351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:14.655433  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:14.867300  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:15.151804  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:15.155275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:15.155321  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:15.367103  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:15.651754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:15.655590  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:15.655740  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:15.868629  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:16.152123  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:16.155556  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:16.155585  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:16.367279  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:16.651583  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:16.655042  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:16.655146  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:16.867499  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:17.151753  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:17.154889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:17.154944  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:17.368258  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:17.651448  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:17.655920  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:17.655988  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:17.868165  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:18.151576  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:18.155019  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:18.155157  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:18.368301  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:18.651579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:18.654851  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:18.655022  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:18.868093  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:19.152647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:19.154885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:19.154951  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:19.368636  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:19.651987  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:19.655509  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:19.655549  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:19.867433  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:20.152200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:20.154985  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:20.155048  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:20.368109  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:20.651638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:20.654894  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:20.654923  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:20.867870  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:21.152292  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:21.155357  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:21.155505  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:21.368035  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:21.652897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:21.656101  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:21.656100  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:21.867817  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:22.152943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:22.155198  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:22.155272  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:22.367576  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:22.652627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:22.655810  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:22.655870  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:22.867990  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:23.152723  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:23.155609  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:23.155624  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:23.367814  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:23.653531  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:23.655283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:23.655824  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:23.867298  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:24.151888  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:24.155832  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:24.155956  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:24.373346  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:24.652179  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:24.655942  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:24.656079  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:24.867787  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:25.152745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:25.156266  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:25.156485  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:25.367952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:25.653577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:25.655613  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:25.655819  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:25.867860  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:26.153299  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:26.155510  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:26.155645  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:26.367671  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:26.652834  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:26.655448  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:26.655652  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:26.867254  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:27.151981  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:27.156009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:27.156850  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:27.367744  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:27.654351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:27.656634  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:27.656737  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:27.868098  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:28.153435  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:28.156745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:28.156944  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:28.367835  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:28.428940  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:50:28.651949  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:28.655492  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:28.655714  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:28.866833  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:50:29.128531  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:50:29.128569  522590 retry.go:31] will retry after 40.39789771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
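	(Editor's note) The validation error above is kubectl's client-side schema check: the first YAML document in ig-crd.yaml is missing the two fields every Kubernetes object must declare, apiVersion and kind. The actual contents of ig-crd.yaml are not shown in this log, so the header below is only a sketch of what a well-formed CRD document starts with; the dry-run command reproduces the same validator without changing the cluster:
	    # a well-formed CRD document must begin with, e.g.:
	    #   apiVersion: apiextensions.k8s.io/v1
	    #   kind: CustomResourceDefinition
	    # reproduce the client-side validation without applying anything:
	    kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml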
	I0916 23:50:29.154066  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:29.156666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:29.156872  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:29.367799  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:29.652238  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:29.654780  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:29.655095  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:29.867922  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:30.152458  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:30.155006  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:30.155093  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:30.367812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:30.652850  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:30.655351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:30.655439  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:30.867340  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:31.151917  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:31.155386  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:31.155417  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:31.367531  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:31.653268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:31.657791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:31.657831  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:31.868270  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:32.155469  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:32.157902  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:32.158614  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:32.368334  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:32.652124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:32.656126  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:32.656171  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:32.867579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:33.152224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:33.155033  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:33.156187  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:33.366965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:33.652338  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:33.655162  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:33.655350  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:33.868673  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:34.152675  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:34.155008  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:34.155063  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:34.368239  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:34.652014  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:34.655025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:34.655185  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:34.867899  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:35.152626  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:35.155359  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:35.155446  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:35.367305  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:35.652378  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:35.655807  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:35.655815  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:35.868004  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:36.152291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:36.155228  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:36.155274  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:36.367904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:36.652666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:36.655054  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:36.655056  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:36.868245  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:37.153660  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:37.155936  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:37.156021  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:37.367947  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:37.652965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:37.654916  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:37.654970  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:37.867352  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:38.152079  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:38.155581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:38.155593  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:38.367781  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:38.652943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:38.655717  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:38.655815  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:38.868640  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:39.152316  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:39.155082  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:39.155138  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:39.368233  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:39.651993  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:39.654885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:39.655026  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:39.868217  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:40.152059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:40.155525  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:40.155590  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:40.367907  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:40.652106  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:40.655499  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:40.655512  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:40.867817  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:41.152251  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:41.154655  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:41.154763  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:41.367545  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:41.652678  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:41.654751  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:41.654768  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:41.868012  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:42.152312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:42.154862  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:42.154889  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:42.368681  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:42.652243  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:42.654497  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:42.654707  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:42.867848  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:43.152560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:43.156124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:43.156157  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:43.367649  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:43.652430  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:43.654968  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:43.654986  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:43.867477  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:44.151715  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:44.154833  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:44.154926  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:44.368003  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:44.652097  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:44.655411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:44.655482  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:44.867734  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:45.151785  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:45.155040  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:45.155294  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:45.367710  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:45.652316  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:45.654798  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:45.654835  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:45.867771  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:46.151940  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:46.155607  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:46.155638  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:46.367470  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:46.652017  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:46.655632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:46.655678  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:46.867796  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:47.152166  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:47.155566  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:47.155778  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:47.367781  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:47.653210  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:47.655490  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:47.655647  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:47.867856  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:48.152084  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:48.155486  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:48.155488  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:48.367425  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:48.651605  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:48.654912  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:48.654974  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:48.868218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:49.151097  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:49.155642  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:49.155716  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:49.367781  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:49.652527  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:49.654528  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:49.654540  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:49.867508  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:50.152341  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:50.155428  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:50.155428  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:50.367631  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:50.651795  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:50.654967  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:50.655191  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:50.867951  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:51.152414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:51.154961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:51.155228  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:51.368136  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:51.654278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:51.658434  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:51.658602  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:51.867554  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:52.151825  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:52.154981  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:52.155043  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:52.368227  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:52.651587  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:52.654841  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:52.654981  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:52.868253  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:53.151568  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:53.154852  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:53.154906  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:53.368332  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:53.652244  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:53.654695  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:53.654772  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:53.867872  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:54.152199  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:54.155137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:54.155272  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:54.367783  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:54.652699  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:54.654783  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:54.654979  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:54.868132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:55.152259  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:55.154647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:55.154768  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:55.367668  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:55.652881  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:55.655002  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:55.655049  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:55.868381  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:56.151518  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:56.154713  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:56.154713  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:56.367620  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:56.651888  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:56.655083  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:56.655175  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:56.868708  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:57.152144  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:57.155438  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:57.155487  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:57.367472  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:57.652234  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:57.654836  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:57.654874  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:57.867903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:58.152561  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:58.154532  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:58.154668  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:58.367739  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:58.652325  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:58.655541  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:58.655728  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:58.867577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:59.152224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:59.155017  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:59.155130  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:59.368654  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:59.652953  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:59.654943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:59.654982  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:59.868114  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:00.151581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:00.154961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:00.155143  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:00.368473  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:00.651816  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:00.655282  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:00.655277  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:00.867147  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:01.151121  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:01.155427  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:01.155456  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:01.367218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:01.651621  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:01.654735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:01.654783  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:01.867758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:02.152018  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:02.155540  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:02.155576  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:02.367896  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:02.652385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:02.655222  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:02.655273  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:02.867265  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:03.151348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:03.156159  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:03.156250  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:03.367497  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:03.652167  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:03.655608  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:03.655715  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:03.867725  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:04.151972  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:04.155471  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:04.155479  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:04.367579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:04.652472  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:04.655145  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:04.655205  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:04.867055  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:05.153048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:05.155508  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:05.155556  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:05.367853  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:05.653083  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:05.655046  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:05.655090  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:05.867138  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:06.152134  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:06.155607  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:06.155674  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:06.367789  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:06.652335  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:06.654809  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:06.654932  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:06.868697  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:07.152531  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:07.154911  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:07.154955  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:07.370805  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:07.652428  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:07.654916  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:07.654974  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:07.868557  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:08.151860  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:08.155090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:08.155145  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:08.367368  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:08.651698  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:08.654845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:08.654852  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:08.868069  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:09.151519  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:09.154937  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:09.154942  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:09.368515  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:09.526750  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:51:09.652541  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:09.655572  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:09.655659  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:09.868054  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:51:10.098163  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:51:10.098324  522590 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
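
Note on the failure above: kubectl validates that every YAML document in an applied file carries both `apiVersion` and `kind`; the stderr reported for /etc/kubernetes/addons/ig-crd.yaml ("apiVersion not set, kind not set") typically means the file contains a document missing those fields, often an empty document left behind by a stray `---` separator or a truncated manifest. The `--validate=false` escape hatch named in the message only skips the check; it does not repair the manifest. As an illustration only (the group, names, and schema below are assumptions, not the actual contents of ig-crd.yaml), a minimally well-formed CRD document that would pass this validation looks like:

# hypothetical CRD sketch; the real inspektor-gadget field values may differ
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: traces.gadget.kinvolk.io    # must equal <plural>.<group>
spec:
  group: gadget.kinvolk.io
  scope: Namespaced
  names:
    plural: traces
    singular: trace
    kind: Trace
    listKind: TraceList
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true

Both `apiVersion` and `kind` sit at the top level of each document, which is exactly what the validator reported missing; everything under `spec` here is illustrative structure required by apiextensions.k8s.io/v1 (a schema per served version, and a metadata.name matching plural.group), not a claim about the addon's actual CRD.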
	I0916 23:51:10.152880  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:10.154839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:10.154875  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:10.367834  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:10.652251  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:10.655021  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:10.655084  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:10.867384  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:11.151842  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:11.155099  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:11.155150  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:11.368186  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:11.652269  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:11.654999  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:11.655256  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:11.867128  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:12.152667  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:12.155099  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:12.155107  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:12.367914  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:12.652518  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:12.654870  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:12.654893  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:12.867312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:13.151982  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:13.155271  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:13.155332  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:13.367823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:13.652387  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:13.654951  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:13.655146  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:13.868844  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:14.153334  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:14.155643  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:14.155904  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:14.368482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:14.652515  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:14.655724  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:14.655757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:14.867812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:15.152601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:15.155443  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:15.155604  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:15.367774  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:15.652539  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:15.655836  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:15.655906  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:15.868440  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:16.151573  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:16.154754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:16.154807  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:16.368168  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:16.652042  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:16.655560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:16.655747  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:16.868218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:17.151965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:17.155140  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:17.155210  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:17.368464  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:17.652037  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:17.655823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:17.655854  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:17.867935  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:18.152022  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:18.155444  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:18.155517  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:18.367482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:18.651927  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:18.654865  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:18.655024  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:18.868282  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:19.151370  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:19.155878  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:19.155924  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:19.368413  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:19.651943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:19.655352  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:19.655352  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:19.868827  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:20.151845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:20.155066  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:20.155072  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:20.369339  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:20.651811  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:20.654774  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:20.654963  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:20.867983  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:21.152276  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:21.154893  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:21.154944  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:21.367794  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:21.652538  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:21.654934  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:21.654939  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:21.867898  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:22.151949  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:22.155295  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:22.155445  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:22.367407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:22.651590  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:22.654904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:22.655019  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:22.867887  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:23.152190  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:23.155502  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:23.155545  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:23.367753  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:23.652562  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:23.654651  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:23.654656  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:23.867848  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:24.152073  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:24.155610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:24.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:24.367957  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:24.652348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:24.654900  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:24.654900  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:24.868057  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:25.152408  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:25.155409  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:25.155602  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:25.368413  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:25.652052  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:25.655209  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:25.655312  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:25.867380  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:26.151535  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:26.155823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:26.155856  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:26.368351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:26.651651  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:26.654990  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:26.654988  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:26.867537  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:27.152091  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:27.155112  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:27.155142  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:27.368638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:27.654137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:27.656355  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:27.656515  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:27.869096  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:28.152385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:28.154581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:28.154673  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:28.367987  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:28.652294  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:28.654753  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:28.654853  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:28.869651  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:29.152647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:29.154807  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:29.154850  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:29.368887  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:29.654241  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:29.655038  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:29.655196  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:29.867665  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:30.151919  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:30.155232  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:30.155296  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:30.367463  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:30.651721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:30.655098  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:30.655163  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:30.867385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:31.151552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:31.154871  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:31.154947  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:31.369090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:31.652787  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:31.654631  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:31.654656  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:31.869965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:32.152268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:32.154797  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:32.154858  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:32.368137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:32.651480  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:32.654729  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:32.654778  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:32.868357  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:33.151932  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:33.155182  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:33.155339  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:33.367560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:33.651975  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:33.655351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:33.655413  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:33.867981  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:34.152479  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:34.155002  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:34.155059  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:34.368688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:34.651549  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:34.655000  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:34.655063  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:34.868189  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:35.151809  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:35.155205  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:35.155350  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:35.367322  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:35.651627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:35.752333  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:35.752426  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:35.868016  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:36.152178  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:36.155466  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:36.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:36.368191  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:36.651475  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:36.654786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:36.654883  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:36.868252  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:37.152153  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:37.155806  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:37.155969  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:37.368131  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:37.652021  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:37.655754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:37.655968  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:37.869697  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:38.152009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:38.155144  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:38.155151  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:38.369995  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:38.652185  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:38.655536  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:38.655553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:38.867639  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:39.151740  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:39.154964  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:39.155029  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:39.368608  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:39.651802  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:39.654757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:39.654961  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:39.869716  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:40.152077  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:40.155323  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:40.155354  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:40.367481  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:40.651750  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:40.655053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:40.655154  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:40.867047  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:41.152227  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:41.154790  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:41.154936  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:41.367727  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:41.652124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:41.655578  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:41.655618  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:41.869685  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:42.152239  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:42.154748  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:42.154775  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:42.367986  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:42.652348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:42.654735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:42.654796  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:42.868157  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:43.151984  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:43.155093  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:43.155268  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:43.367574  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:43.652278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:43.655113  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:43.655163  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:43.867108  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:44.151635  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:44.155169  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:44.155303  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:44.367632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:44.654449  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:44.656348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:44.656416  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:44.867492  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:45.151632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:45.155015  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:45.155082  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:45.368046  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:45.652581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:45.655278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:45.655440  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:45.867304  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:46.151985  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:46.155138  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:46.155139  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:46.367275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:46.652201  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:46.654659  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:46.654708  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:46.867813  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:47.152102  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:47.155410  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:47.155445  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:47.368132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:47.652347  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:47.654903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:47.654929  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:47.868615  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:48.151762  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:48.154894  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:48.155015  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:48.367728  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:48.652716  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:48.655105  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:48.655114  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:48.867844  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:49.151899  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:49.155222  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:49.155285  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:49.367647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:49.651960  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:49.655182  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:49.655212  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:49.867701  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:50.152323  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:50.154730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:50.154952  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:50.368036  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:50.652752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:50.655140  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:50.655212  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:50.867998  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:51.152002  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:51.155125  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:51.155152  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:51.367814  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:51.652049  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:51.655522  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:51.655726  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:51.868294  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:52.151791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:52.155565  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:52.155573  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:52.367865  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:52.652161  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:52.655512  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:52.655672  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:52.868579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:53.151650  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:53.154924  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:53.155034  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:53.369092  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:53.651132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:53.655513  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:53.655522  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:53.868691  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:54.152450  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:54.155354  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:54.155524  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:54.367600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:54.651882  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:54.655373  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:54.655408  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:54.867056  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:55.152214  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:55.154682  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:55.154691  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:55.367828  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:55.652289  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:55.654838  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:55.654919  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:55.868482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:56.152185  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:56.155573  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:56.155680  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:56.367605  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:56.652000  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:56.655613  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:56.655628  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:56.867754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:57.152556  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:57.155032  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:57.155095  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:57.367975  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:57.652348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:57.654696  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:57.654741  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:57.868401  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... poll entries of this form repeat roughly every half second from 23:51:57 through 23:52:52 for the selectors kubernetes.io/minikube-addons=gcp-auth, kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=registry, and app.kubernetes.io/name=ingress-nginx; every check reports the same "current state: Pending: [<nil>]" ...]
	I0916 23:52:52.652610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:52.655667  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:52.655854  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:52.901721  522590 kapi.go:107] duration metric: took 3m34.537544348s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 23:52:52.906543  522590 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-069011 cluster.
	I0916 23:52:52.912324  522590 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 23:52:52.913737  522590 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
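[Editor's note: the three out.go messages above describe the gcp-auth addon's opt-out mechanism. As a minimal sketch of what that looks like from the client side (not minikube's own code), a pod that should not receive mounted credentials can carry the `gcp-auth-skip-secret` label named in the message. The label key comes from the log; the value "true", the pod name, and the namespace below are illustrative assumptions.]

    // skiplabel.go: create a pod that the gcp-auth webhook should skip.
    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load the default kubeconfig, the same file kubectl uses.
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}

    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{
    			Name: "no-gcp-creds", // hypothetical pod name
    			Labels: map[string]string{
    				// Per the message above, the presence of this label key
    				// opts the pod out of credential mounting. The value
    				// "true" is a convention assumed here, not taken from the log.
    				"gcp-auth-skip-secret": "true",
    			},
    		},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{
    				{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}},
    			},
    		},
    	}
    	if _, err := clientset.CoreV1().Pods("default").Create(
    		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }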
	I0916 23:52:53.153197  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... poll entries repeat roughly twice per second from 23:52:53 through 23:52:57 for kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=registry, and app.kubernetes.io/name=ingress-nginx, each still reporting "current state: Pending: [<nil>]" ...]
	I0916 23:52:57.152186  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:57.155542  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:57.155561  522590 kapi.go:107] duration metric: took 3m45.503354476s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 23:52:57.652350  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:57.655498  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:58.152881  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:58.154850  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:58.652665  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:58.654696  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:59.152543  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:59.154283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:59.653277  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:59.659941  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:00.152852  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:00.154649  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:00.652327  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:00.654800  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:01.152414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:01.154525  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:01.651817  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:01.655138  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:02.152332  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:02.154656  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:02.653502  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:02.656037  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:03.151857  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:03.155055  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:03.652334  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:03.654876  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:04.152174  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:04.155870  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:04.653124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:04.655053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:05.153568  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:05.155625  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:05.653230  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:05.655236  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:06.152361  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:06.154928  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:06.653059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:06.656200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:07.152336  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:07.155224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:07.652346  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:07.655712  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:08.155752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:08.155824  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:08.653610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:08.655208  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:09.152628  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:09.154934  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:09.652494  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:09.655144  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:10.154348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:10.155986  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:10.652369  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:10.655443  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:11.152148  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:11.155670  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:11.652553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:11.655243  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:12.152796  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:12.155106  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:12.651747  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:12.655634  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:13.153010  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:13.155374  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:13.654738  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:13.656482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:14.152952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:14.155229  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:14.652523  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:14.655028  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:15.152364  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:15.155721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:15.655954  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:15.656795  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:16.152967  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:16.154926  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:16.653027  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:16.655826  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:17.153039  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:17.154839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:17.653034  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:17.655038  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:18.152156  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:18.156123  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:18.651828  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:18.654999  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:19.151648  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:19.154596  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:19.652222  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:19.654551  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:20.155150  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:20.155193  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:20.652029  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:20.655101  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:21.151749  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:21.154961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:21.651672  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:21.655009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:22.152329  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:22.154730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:22.652063  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:22.655272  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:23.152182  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:23.155422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:23.652218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:23.654560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:24.152574  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:24.155253  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:24.652502  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:24.655345  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:25.151663  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:25.155115  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:25.651721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:25.655044  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:26.152383  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:26.155509  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:26.652354  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:26.654747  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:27.169011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:27.169001  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:27.653424  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:27.655714  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:28.152979  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:28.254144  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:28.651804  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:28.655470  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:29.151827  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:29.155108  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:29.652422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:29.655116  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:30.152193  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:30.155976  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:30.652210  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:30.654980  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:31.151709  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:31.155038  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:31.651589  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:31.655050  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:32.151868  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:32.155145  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:32.652363  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:32.655892  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:33.151643  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:33.154810  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:33.653583  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:33.655279  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:34.153153  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:34.155522  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:34.652584  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:34.655570  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:35.151580  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:35.156561  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:35.652732  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:35.655133  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:36.155361  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:36.158601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:36.652275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:36.654674  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:37.153755  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:37.155714  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:37.652926  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:37.654759  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:38.151466  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:38.154733  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:38.653313  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:38.655745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:39.152234  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:39.155638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:39.652445  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:39.654541  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:40.152461  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:40.155143  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:40.652312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:40.654686  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:41.152156  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:41.155170  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:41.651644  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:41.654733  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:42.152309  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:42.154360  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:42.652338  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:42.654550  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:43.151904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:43.154960  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:43.652091  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:43.655542  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:44.151570  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:44.154712  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:44.652708  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:44.654522  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:45.151593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:45.154608  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:45.651922  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:45.655174  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:46.151376  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:46.155482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:46.652627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:46.654516  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:47.151782  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:47.154824  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:47.652429  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:47.654757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:48.152137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:48.154936  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:48.651792  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:48.654929  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:49.152207  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:49.155200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:49.652077  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:49.655059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:50.152055  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:50.155283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:50.651757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:50.654677  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:51.152004  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:51.154803  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:51.653046  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:51.654923  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:52.152123  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:52.154978  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:52.651950  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:52.654986  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:53.151595  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:53.154725  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:53.652661  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:53.654540  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:54.152011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:54.155079  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:54.652239  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:54.654476  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:55.151772  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:55.155226  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:55.652520  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:55.655124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:56.151415  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:56.155604  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:56.652777  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:56.654897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:57.152275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:57.155829  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:57.653025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:57.654754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:58.152978  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:58.154716  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:58.652635  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:58.654449  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:59.152070  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:59.155270  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:59.652577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:59.655424  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:00.152756  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:00.154426  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:00.651964  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:00.655181  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:01.151369  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:01.155561  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:01.651593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:01.654586  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:02.152252  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:02.154655  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:02.652610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:02.654423  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:03.152030  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:03.155167  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:03.651855  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:03.654881  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:04.151556  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:04.154852  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:04.652834  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:04.654500  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:05.152255  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:05.154344  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:05.652483  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:05.655325  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:06.151729  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:06.154664  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:06.652904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:06.654681  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:07.152267  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:07.154724  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:07.652291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:07.654988  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:08.151577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:08.154865  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:08.652678  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:08.654618  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:09.152302  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:09.154688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:09.653092  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:09.654963  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:10.151758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:10.154735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:10.652999  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:10.654845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:11.151513  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:11.154498  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:11.652494  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:11.654909  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:12.151298  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:12.155557  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:12.652643  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:12.654491  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:13.152751  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:13.155246  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:13.652126  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:13.655183  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:14.151763  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:14.155046  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:14.652276  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:14.654785  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:15.152658  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:15.154758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:15.652985  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:15.655060  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:16.151705  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:16.154775  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:16.652773  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:16.654589  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:17.152592  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:17.155097  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:17.651889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:17.655277  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:18.152217  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:18.154701  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:18.652903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:18.654813  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:19.152686  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:19.154506  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:19.652260  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:19.654251  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:20.152385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:20.154777  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:20.652915  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:20.654754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:21.152381  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:21.155278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:21.651555  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:21.654768  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:22.152695  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:22.154647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:22.652919  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:22.654785  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:23.151929  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:23.155096  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:23.652215  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:23.654600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:24.152243  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:24.154806  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:24.653577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:24.655336  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:25.151915  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:25.154836  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:25.651480  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:25.655757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:26.152467  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:26.154712  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:26.653379  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:26.655466  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:27.151800  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:27.155291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:27.653102  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:27.655592  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:28.153140  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:28.155428  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:28.652276  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:28.654838  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... 168 near-identical kapi.go:96 "waiting for pod" entries elided: both label selectors ("kubernetes.io/minikube-addons=csi-hostpath-driver" and "kubernetes.io/minikube-addons=registry") were polled every ~500ms and remained Pending from 23:54:29 through 23:55:10 ...]
	I0916 23:55:11.152090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:11.155188  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:11.652025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:11.652821  522590 kapi.go:107] duration metric: took 6m0.000625805s to wait for kubernetes.io/minikube-addons=registry ...
	W0916 23:55:11.652991  522590 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0916 23:55:12.148606  522590 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=csi-hostpath-driver" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0916 23:55:12.148655  522590 kapi.go:107] duration metric: took 6m0.000415083s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0916 23:55:12.148771  522590 out.go:285] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0916 23:55:12.151062  522590 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, ingress-dns, amd-gpu-device-plugin, storage-provisioner, default-storageclass, storage-provisioner-rancher, cloud-spanner, metrics-server, yakd, volumesnapshots, gcp-auth, ingress
	I0916 23:55:12.152575  522590 addons.go:514] duration metric: took 6m2.25568849s for enable addons: enabled=[registry-creds nvidia-device-plugin ingress-dns amd-gpu-device-plugin storage-provisioner default-storageclass storage-provisioner-rancher cloud-spanner metrics-server yakd volumesnapshots gcp-auth ingress]
	I0916 23:55:12.152638  522590 start.go:246] waiting for cluster config update ...
	I0916 23:55:12.152661  522590 start.go:255] writing updated cluster config ...
	I0916 23:55:12.152955  522590 ssh_runner.go:195] Run: rm -f paused
	I0916 23:55:12.157549  522590 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:55:12.161141  522590 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m872b" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.165703  522590 pod_ready.go:94] pod "coredns-66bc5c9577-m872b" is "Ready"
	I0916 23:55:12.165731  522590 pod_ready.go:86] duration metric: took 4.567019ms for pod "coredns-66bc5c9577-m872b" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.168067  522590 pod_ready.go:83] waiting for pod "etcd-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.172550  522590 pod_ready.go:94] pod "etcd-addons-069011" is "Ready"
	I0916 23:55:12.172583  522590 pod_ready.go:86] duration metric: took 4.489308ms for pod "etcd-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.174872  522590 pod_ready.go:83] waiting for pod "kube-apiserver-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.179401  522590 pod_ready.go:94] pod "kube-apiserver-addons-069011" is "Ready"
	I0916 23:55:12.179432  522590 pod_ready.go:86] duration metric: took 4.532992ms for pod "kube-apiserver-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.181473  522590 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.561817  522590 pod_ready.go:94] pod "kube-controller-manager-addons-069011" is "Ready"
	I0916 23:55:12.561846  522590 pod_ready.go:86] duration metric: took 380.349392ms for pod "kube-controller-manager-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.763149  522590 pod_ready.go:83] waiting for pod "kube-proxy-v85kq" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.161850  522590 pod_ready.go:94] pod "kube-proxy-v85kq" is "Ready"
	I0916 23:55:13.161880  522590 pod_ready.go:86] duration metric: took 398.696904ms for pod "kube-proxy-v85kq" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.362802  522590 pod_ready.go:83] waiting for pod "kube-scheduler-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.761895  522590 pod_ready.go:94] pod "kube-scheduler-addons-069011" is "Ready"
	I0916 23:55:13.761929  522590 pod_ready.go:86] duration metric: took 399.094008ms for pod "kube-scheduler-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.761944  522590 pod_ready.go:40] duration metric: took 1.604356273s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:55:13.810173  522590 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0916 23:55:13.812279  522590 out.go:179] * Done! kubectl is now configured to use "addons-069011" cluster and "default" namespace by default
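
For context on the 6m0s timeouts above: the kapi.go wait loop simply polls the API server for pods matching a label selector until every match is Running, or the enclosing context's deadline expires, retrying on a short interval. Below is a minimal illustrative client-go sketch of that pattern, assuming a standard kubeconfig; the 500ms cadence is inferred from the timestamps above, and this is not minikube's actual kapi implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls until every pod matching selector is Running,
	// or ctx expires (producing the "context deadline exceeded" above).
	func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
		for {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err // e.g. the client rate limiter surfacing ctx's deadline
			}
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			if len(pods.Items) > 0 && running == len(pods.Items) {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond): // cadence inferred from the log
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		err = waitForPods(ctx, kubernetes.NewForConfigOrDie(cfg), "kube-system", "kubernetes.io/minikube-addons=registry")
		fmt.Println("wait result:", err)
	}

Against this cluster the loop would spin exactly as logged: the registry pod's image pull kept failing (see the CRI-O log below) and the csi-hostpathplugin containers only finished starting well after 23:55 (see container status), so ctx.Err() fires at the 6-minute mark for both selectors.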
	
	
	==> CRI-O <==
	Sep 17 00:06:10 addons-069011 crio[933]: time="2025-09-17 00:06:10.939434662Z" level=info msg="Adding pod local-path-storage_helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb to CNI network \"kindnet\" (type=ptp)"
	Sep 17 00:06:10 addons-069011 crio[933]: time="2025-09-17 00:06:10.951130119Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb Namespace:local-path-storage ID:5e3c4ae1d22b433f1c4812f5e83d15f1ba84cbd08e90353e48c70de4ae5019d5 UID:de6c504b-6eb1-4731-8d69-f050d70230ed NetNS:/var/run/netns/eaf84c39-7019-4815-9360-9d92491ecad9 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 17 00:06:10 addons-069011 crio[933]: time="2025-09-17 00:06:10.951273786Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb for CNI network kindnet (type=ptp)"
	Sep 17 00:06:10 addons-069011 crio[933]: time="2025-09-17 00:06:10.952211368Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 17 00:06:10 addons-069011 crio[933]: time="2025-09-17 00:06:10.952991365Z" level=info msg="Ran pod sandbox 5e3c4ae1d22b433f1c4812f5e83d15f1ba84cbd08e90353e48c70de4ae5019d5 with infra container: local-path-storage/helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb/POD" id=6df5a339-9924-43ed-9efe-351c9c4b2ed2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 17 00:06:10 addons-069011 crio[933]: time="2025-09-17 00:06:10.954421589Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=f767b13c-dd4a-49cc-bb55-8a26c902fc5a name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:06:10 addons-069011 crio[933]: time="2025-09-17 00:06:10.954769149Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=f767b13c-dd4a-49cc-bb55-8a26c902fc5a name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:06:22 addons-069011 crio[933]: time="2025-09-17 00:06:22.174773721Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=e3851b4a-ba27-4ad4-8982-4819511ed358 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:06:22 addons-069011 crio[933]: time="2025-09-17 00:06:22.175089765Z" level=info msg="Image docker.io/nginx:alpine not found" id=e3851b4a-ba27-4ad4-8982-4819511ed358 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:06:37 addons-069011 crio[933]: time="2025-09-17 00:06:37.174677938Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=760d2cf3-447e-4c8c-a1c6-7bddb809880a name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:06:37 addons-069011 crio[933]: time="2025-09-17 00:06:37.174957993Z" level=info msg="Image docker.io/nginx:alpine not found" id=760d2cf3-447e-4c8c-a1c6-7bddb809880a name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:06:39 addons-069011 crio[933]: time="2025-09-17 00:06:39.192112075Z" level=info msg="Pulling image: docker.io/nginx:latest" id=149a22a9-14d6-43c1-afe8-41831cc1af6f name=/runtime.v1.ImageService/PullImage
	Sep 17 00:06:39 addons-069011 crio[933]: time="2025-09-17 00:06:39.196746147Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 17 00:06:51 addons-069011 crio[933]: time="2025-09-17 00:06:51.174369693Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=941e5856-2e49-4606-b9ee-bcaac3f5cce7 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:06:51 addons-069011 crio[933]: time="2025-09-17 00:06:51.174627138Z" level=info msg="Image docker.io/nginx:alpine not found" id=941e5856-2e49-4606-b9ee-bcaac3f5cce7 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:06:54 addons-069011 crio[933]: time="2025-09-17 00:06:54.176054739Z" level=info msg="Checking image status: docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d" id=3a61d3e3-efb3-4198-a84e-16396910ae12 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:06:54 addons-069011 crio[933]: time="2025-09-17 00:06:54.176444648Z" level=info msg="Image docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d not found" id=3a61d3e3-efb3-4198-a84e-16396910ae12 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:02 addons-069011 crio[933]: time="2025-09-17 00:07:02.174686001Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=96472d44-52f7-405b-bf88-cadc1f460a52 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:02 addons-069011 crio[933]: time="2025-09-17 00:07:02.174975840Z" level=info msg="Image docker.io/nginx:alpine not found" id=96472d44-52f7-405b-bf88-cadc1f460a52 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:06 addons-069011 crio[933]: time="2025-09-17 00:07:06.173983666Z" level=info msg="Checking image status: docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d" id=a1e9cfa4-a05e-4809-82b0-699d45fa473a name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:06 addons-069011 crio[933]: time="2025-09-17 00:07:06.174445586Z" level=info msg="Image docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d not found" id=a1e9cfa4-a05e-4809-82b0-699d45fa473a name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:09 addons-069011 crio[933]: time="2025-09-17 00:07:09.284726746Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=1ef84c15-9f34-4a1f-bdb0-c9e6b6a98c3e name=/runtime.v1.ImageService/PullImage
	Sep 17 00:07:09 addons-069011 crio[933]: time="2025-09-17 00:07:09.287675302Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Sep 17 00:07:14 addons-069011 crio[933]: time="2025-09-17 00:07:14.176382336Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=d4c12d02-0f5e-46d2-b418-0ce4cde888e6 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:14 addons-069011 crio[933]: time="2025-09-17 00:07:14.176681855Z" level=info msg="Image docker.io/nginx:alpine not found" id=d4c12d02-0f5e-46d2-b418-0ce4cde888e6 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	8fc15d8cb7dd5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          8 minutes ago       Running             csi-snapshotter                          0                   e614fc1047195       csi-hostpathplugin-s98vb
	295b9edc02db1       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          10 minutes ago      Running             csi-provisioner                          0                   e614fc1047195       csi-hostpathplugin-s98vb
	3bebfc3ce5f89       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          10 minutes ago      Running             busybox                                  0                   b34e9dc849123       busybox
	0994d530b2186       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            11 minutes ago      Running             liveness-probe                           0                   e614fc1047195       csi-hostpathplugin-s98vb
	d78ede218b3d9       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           12 minutes ago      Running             hostpath                                 0                   e614fc1047195       csi-hostpathplugin-s98vb
	16a4495ac9a55       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                14 minutes ago      Running             node-driver-registrar                    0                   e614fc1047195       csi-hostpathplugin-s98vb
	cb0aaa55cf5e9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            14 minutes ago      Running             gadget                                   0                   38b62a86f7523       gadget-g862x
	75b35093f1f14       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              15 minutes ago      Running             registry-proxy                           0                   f2e835ff4c172       registry-proxy-gtpv9
	af48fae595f24       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      16 minutes ago      Running             volume-snapshot-controller               0                   7daa29e729a88       snapshot-controller-7d9fbc56b8-st98r
	fce1ccd8d33b3       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   16 minutes ago      Running             csi-external-health-monitor-controller   0                   e614fc1047195       csi-hostpathplugin-s98vb
	3c653d4c50b5c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      16 minutes ago      Running             volume-snapshot-controller               0                   4be25aad82a4e       snapshot-controller-7d9fbc56b8-s7m82
	0957eacca23bd       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              16 minutes ago      Running             csi-resizer                              0                   b8131d2ee78de       csi-hostpath-resizer-0
	ad4a09c21105c       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             17 minutes ago      Running             csi-attacher                             0                   15f9a9c33b53e       csi-hostpath-attacher-0
	c1b11b9e2fae1       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             17 minutes ago      Running             local-path-provisioner                   0                   be69758a594c2       local-path-provisioner-648f6765c9-4qs6g
	7d0db99be084d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             17 minutes ago      Running             storage-provisioner                      0                   e26878809420e       storage-provisioner
	b62ac7b1e2d93       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             17 minutes ago      Running             coredns                                  0                   90cd65a058e3e       coredns-66bc5c9577-m872b
	81f4db589dfd0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             18 minutes ago      Running             kindnet-cni                              0                   282dceccf27e4       kindnet-hn7tx
	8204c89cdc90d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                                             18 minutes ago      Running             kube-proxy                               0                   076ce47b67764       kube-proxy-v85kq
	d1d2d3ef1a2d6       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                                             18 minutes ago      Running             kube-controller-manager                  0                   2befa508c819b       kube-controller-manager-addons-069011
	f4991aa96dbe9       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                                             18 minutes ago      Running             kube-apiserver                           0                   24f1de8dafedd       kube-apiserver-addons-069011
	ecbc264153ff2       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                                             18 minutes ago      Running             kube-scheduler                           0                   3af000cb5a57c       kube-scheduler-addons-069011
	5a81076e6d9a8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             18 minutes ago      Running             etcd                                     0                   f590790ed13d4       etcd-addons-069011
	
	
	==> coredns [b62ac7b1e2d935063ca8c0594642886e49ad0423507f04d148e7bd385ca935ce] <==
	[INFO] 10.244.0.16:40891 - 62455 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006004356s
	[INFO] 10.244.0.16:40891 - 42316 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.000099299s
	[INFO] 10.244.0.16:40891 - 30396 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.000058907s
	[INFO] 10.244.0.16:40891 - 33856 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000093316s
	[INFO] 10.244.0.16:40891 - 53324 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000121329s
	[INFO] 10.244.0.16:40891 - 52084 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.00007194s
	[INFO] 10.244.0.16:40891 - 39391 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000095315s
	[INFO] 10.244.0.16:40891 - 65241 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000173559s
	[INFO] 10.244.0.16:40891 - 37623 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000169665s
	[INFO] 10.244.0.16:42864 - 43805 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000172394s
	[INFO] 10.244.0.16:42864 - 39934 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000258947s
	[INFO] 10.244.0.16:42864 - 1203 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000126592s
	[INFO] 10.244.0.16:42864 - 37825 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00014194s
	[INFO] 10.244.0.16:42864 - 26769 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000084095s
	[INFO] 10.244.0.16:42864 - 29677 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000117192s
	[INFO] 10.244.0.16:42864 - 33177 "A IN registry.kube-system.svc.cluster.local.local. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00395323s
	[INFO] 10.244.0.16:42864 - 18001 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005148731s
	[INFO] 10.244.0.16:42864 - 34135 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.000087068s
	[INFO] 10.244.0.16:42864 - 3939 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.000076808s
	[INFO] 10.244.0.16:42864 - 31856 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000064814s
	[INFO] 10.244.0.16:42864 - 30560 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000059355s
	[INFO] 10.244.0.16:42864 - 20815 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000045368s
	[INFO] 10.244.0.16:42864 - 4311 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000066653s
	[INFO] 10.244.0.16:42864 - 45274 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00010971s
	[INFO] 10.244.0.16:42864 - 20456 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00013627s
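
The NXDOMAIN bursts above are expected resolver behavior rather than a DNS failure: with the Kubernetes default options ndots:5, the name registry.kube-system.svc.cluster.local has only four dots, so the pod's resolver walks the entire search list before trying the name verbatim, which is the final NOERROR answer. A pod resolv.conf consistent with the suffixes seen above would look like the following (reconstructed for illustration; the nameserver is the conventional kube-dns ClusterIP, not captured from this run):

	search kube-system.svc.cluster.local svc.cluster.local cluster.local local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	nameserver 10.96.0.10
	options ndots:5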
	
	
	==> describe nodes <==
	Name:               addons-069011
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-069011
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=addons-069011
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_49_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-069011
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-069011"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:49:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-069011
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:07:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:03:10 +0000   Tue, 16 Sep 2025 23:49:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:03:10 +0000   Tue, 16 Sep 2025 23:49:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:03:10 +0000   Tue, 16 Sep 2025 23:49:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:03:10 +0000   Tue, 16 Sep 2025 23:49:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-069011
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e6a06e1e17043f19f3b8f5ea0927359
	  System UUID:                fa23b867-4022-409a-8baa-bf981ffedafe
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  gadget                      gadget-g862x                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-66bc5c9577-m872b                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 csi-hostpathplugin-s98vb                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-addons-069011                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         18m
	  kube-system                 kindnet-hn7tx                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-apiserver-addons-069011                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-069011                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-v85kq                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-069011                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 registry-66898fdd98-bl4r5                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 registry-proxy-gtpv9                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 snapshot-controller-7d9fbc56b8-s7m82                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 snapshot-controller-7d9fbc56b8-st98r                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  local-path-storage          helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  local-path-storage          local-path-provisioner-648f6765c9-4qs6g                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node addons-069011 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node addons-069011 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node addons-069011 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m   node-controller  Node addons-069011 event: Registered Node addons-069011 in Controller
	  Normal  NodeReady                17m   kubelet          Node addons-069011 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	
	
	==> etcd [5a81076e6d9a8c9983866e09b1190810cd0059c34edeae1a479f9d18f3003a91] <==
	{"level":"warn","ts":"2025-09-16T23:49:01.021210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.027886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.034514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.041663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.048524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.054851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.061680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.068240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.075225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.081757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.105206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.111554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.154896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:12.666348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:12.673196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.575058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.581784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.598000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.605378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33386","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-16T23:59:00.630787Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1449}
	{"level":"info","ts":"2025-09-16T23:59:00.656834Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1449,"took":"25.282457ms","hash":3232880921,"current-db-size-bytes":5799936,"current-db-size":"5.8 MB","current-db-size-in-use-bytes":3645440,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2025-09-16T23:59:00.656898Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3232880921,"revision":1449,"compact-revision":-1}
	{"level":"info","ts":"2025-09-17T00:04:00.635503Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2177}
	{"level":"info","ts":"2025-09-17T00:04:00.654518Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2177,"took":"18.415625ms","hash":2584493315,"current-db-size-bytes":5799936,"current-db-size":"5.8 MB","current-db-size-in-use-bytes":3166208,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2025-09-17T00:04:00.654575Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2584493315,"revision":2177,"compact-revision":1449}
	
	
	==> kernel <==
	 00:07:15 up  2:49,  0 users,  load average: 0.22, 1.75, 22.18
	Linux addons-069011 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [81f4db589dfd0f8f014a7fc056f2d7f752ecc52737aea10ae2f8a98d0242428b] <==
	I0917 00:05:10.188318       1 main.go:301] handling current node
	I0917 00:05:20.188575       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:05:20.188631       1 main.go:301] handling current node
	I0917 00:05:30.190163       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:05:30.190214       1 main.go:301] handling current node
	I0917 00:05:40.184609       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:05:40.184669       1 main.go:301] handling current node
	I0917 00:05:50.191808       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:05:50.191858       1 main.go:301] handling current node
	I0917 00:06:00.189633       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:06:00.189679       1 main.go:301] handling current node
	I0917 00:06:10.189374       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:06:10.189444       1 main.go:301] handling current node
	I0917 00:06:20.190624       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:06:20.190663       1 main.go:301] handling current node
	I0917 00:06:30.189792       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:06:30.189835       1 main.go:301] handling current node
	I0917 00:06:40.186534       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:06:40.186588       1 main.go:301] handling current node
	I0917 00:06:50.191534       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:06:50.191568       1 main.go:301] handling current node
	I0917 00:07:00.183932       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:07:00.183965       1 main.go:301] handling current node
	I0917 00:07:10.187158       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:07:10.187200       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f4991aa96dbe98af7f934784cdc7973d5aabec72325938f0e98ad8efde3d06e3] <==
	I0916 23:56:24.856015       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0916 23:56:38.562764       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43110: use of closed network connection
	E0916 23:56:38.758708       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43158: use of closed network connection
	I0916 23:56:47.547088       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0916 23:56:47.750812       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.94.177"}
	I0916 23:56:48.077381       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.184.141"}
	I0916 23:56:56.387694       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0916 23:56:58.875443       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:57:28.517320       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:58:21.717919       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:58:53.740979       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:59:01.561467       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0916 23:59:46.839359       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:00:03.548840       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:01:10.960424       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:01:15.531695       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:28.446522       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:31.841808       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:03:34.885369       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:03:39.392704       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:04:37.349511       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:04:53.960048       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:05:53.849137       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:06:16.946188       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:07:09.469741       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [d1d2d3ef1a2d61d604d7b7b71875c31a98127791ebbcaaae9e7c5dcebb1fd036] <==
	I0916 23:49:08.559424       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0916 23:49:08.560582       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0916 23:49:08.560682       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0916 23:49:08.562044       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0916 23:49:08.562105       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:49:08.562171       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:49:08.562209       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:49:08.562217       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:49:08.562221       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:49:08.563325       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:49:08.564561       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:49:08.570797       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-069011" podCIDRs=["10.244.0.0/24"]
	I0916 23:49:08.576824       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0916 23:49:38.568454       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 23:49:38.568633       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0916 23:49:38.568684       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0916 23:49:38.586865       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0916 23:49:38.591210       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0916 23:49:38.668805       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:49:38.692110       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:49:53.514314       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0916 23:56:52.202912       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0916 23:58:53.764380       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0917 00:01:02.592919       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0917 00:05:02.667430       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	
	
	==> kube-proxy [8204c89cdc90d58370aa745a3053c12e5b976409a1e0bedddf9508ac3e770c1f] <==
	I0916 23:49:09.803647       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:49:09.874911       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:49:09.984976       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:49:09.985628       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:49:09.986296       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:49:10.154642       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:49:10.159433       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:49:10.183201       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:49:10.195463       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:49:10.195513       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:49:10.199563       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:49:10.199664       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:49:10.200188       1 config.go:309] "Starting node config controller"
	I0916 23:49:10.200265       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:49:10.200334       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:49:10.200369       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:49:10.200991       1 config.go:200] "Starting service config controller"
	I0916 23:49:10.201078       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:49:10.299859       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:49:10.300474       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0916 23:49:10.300501       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0916 23:49:10.302086       1 shared_informer.go:356] "Caches are synced" controller="service config"
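(Editor's note: the startup warning above is advisory only; with nodePortAddresses unset, NodePort traffic is accepted on every local IP. If the narrower behavior is wanted, a sketch of the change in a kubeadm-managed cluster, assuming the stock kube-proxy ConfigMap and DaemonSet names and kube-proxy >= v1.29, which accepts the special value "primary":

    # Set nodePortAddresses: ["primary"] in the config.conf key, then restart:
    kubectl --context addons-069011 -n kube-system edit configmap kube-proxy
    kubectl --context addons-069011 -n kube-system rollout restart daemonset kube-proxy
)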
	
	
	==> kube-scheduler [ecbc264153ff2a219390febac6665f8efc1a49ab24db502b79ba6888e6bd5b71] <==
	E0916 23:49:01.591306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0916 23:49:01.591979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0916 23:49:01.591995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:49:01.592038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0916 23:49:01.592032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0916 23:49:01.592058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0916 23:49:01.592081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0916 23:49:01.592128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0916 23:49:01.592273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0916 23:49:01.592272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0916 23:49:01.592315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0916 23:49:02.478666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0916 23:49:02.478742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0916 23:49:02.495998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0916 23:49:02.533597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0916 23:49:02.645572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0916 23:49:02.658831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0916 23:49:02.700650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:49:02.730028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0916 23:49:02.731014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0916 23:49:02.807698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0916 23:49:02.811032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0916 23:49:02.813063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0916 23:49:02.832467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I0916 23:49:05.387364       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
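(Editor's note: the burst of "Failed to watch ... forbidden" errors is confined to the first seconds after 23:49:01, before the scheduler's RBAC bindings had propagated; the informer caches sync cleanly by 23:49:05. One way to confirm the permissions settled, sketched with kubectl impersonation:

    kubectl --context addons-069011 auth can-i list pods \
      --as=system:kube-scheduler --all-namespaces    # expect "yes" once RBAC is in place
)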
	
	
	==> kubelet <==
	Sep 17 00:06:24 addons-069011 kubelet[1557]: E0917 00:06:24.408404    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067584408082886  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:06:34 addons-069011 kubelet[1557]: E0917 00:06:34.410208    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067594409930557  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:06:34 addons-069011 kubelet[1557]: E0917 00:06:34.410251    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067594409930557  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:06:37 addons-069011 kubelet[1557]: E0917 00:06:37.175285    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="44795e64-34b3-4492-b6af-9e6353fa4bb4"
	Sep 17 00:06:39 addons-069011 kubelet[1557]: E0917 00:06:39.191550    1557 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d"
	Sep 17 00:06:39 addons-069011 kubelet[1557]: E0917 00:06:39.191626    1557 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d"
	Sep 17 00:06:39 addons-069011 kubelet[1557]: E0917 00:06:39.191915    1557 kuberuntime_manager.go:1449] "Unhandled Error" err="container registry start failed in pod registry-66898fdd98-bl4r5_kube-system(34782a61-58ac-458e-ab2f-7a22bac44c65): ErrImagePull: reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 00:06:39 addons-069011 kubelet[1557]: E0917 00:06:39.191986    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ErrImagePull: \"reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-bl4r5" podUID="34782a61-58ac-458e-ab2f-7a22bac44c65"
	Sep 17 00:06:44 addons-069011 kubelet[1557]: E0917 00:06:44.411926    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067604411682936  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:06:44 addons-069011 kubelet[1557]: E0917 00:06:44.411958    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067604411682936  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:06:51 addons-069011 kubelet[1557]: E0917 00:06:51.174947    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="44795e64-34b3-4492-b6af-9e6353fa4bb4"
	Sep 17 00:06:54 addons-069011 kubelet[1557]: E0917 00:06:54.176827    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-bl4r5" podUID="34782a61-58ac-458e-ab2f-7a22bac44c65"
	Sep 17 00:06:54 addons-069011 kubelet[1557]: E0917 00:06:54.414425    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067614414160550  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:06:54 addons-069011 kubelet[1557]: E0917 00:06:54.414460    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067614414160550  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:07:02 addons-069011 kubelet[1557]: E0917 00:07:02.175313    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="44795e64-34b3-4492-b6af-9e6353fa4bb4"
	Sep 17 00:07:04 addons-069011 kubelet[1557]: E0917 00:07:04.416628    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067624416385106  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:07:04 addons-069011 kubelet[1557]: E0917 00:07:04.416658    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067624416385106  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:07:06 addons-069011 kubelet[1557]: E0917 00:07:06.174772    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-bl4r5" podUID="34782a61-58ac-458e-ab2f-7a22bac44c65"
	Sep 17 00:07:09 addons-069011 kubelet[1557]: E0917 00:07:09.284236    1557 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 17 00:07:09 addons-069011 kubelet[1557]: E0917 00:07:09.284300    1557 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 17 00:07:09 addons-069011 kubelet[1557]: E0917 00:07:09.284546    1557 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(0b15e693-4577-4039-b409-5badaa871bfc): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 00:07:09 addons-069011 kubelet[1557]: E0917 00:07:09.284606    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="0b15e693-4577-4039-b409-5badaa871bfc"
	Sep 17 00:07:14 addons-069011 kubelet[1557]: E0917 00:07:14.176979    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="44795e64-34b3-4492-b6af-9e6353fa4bb4"
	Sep 17 00:07:14 addons-069011 kubelet[1557]: E0917 00:07:14.419323    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067634418972388  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:07:14 addons-069011 kubelet[1557]: E0917 00:07:14.419430    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067634418972388  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
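(Editor's note: every pull failure in this log shares one root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests), not a cluster fault. The eviction-manager lines are a separate, recurring kubelet/cri-o image-stats mismatch. A sketch of two mitigations and one check, assuming host-side Docker credentials and a reachable Hub mirror:

    # Side-load the image so kubelet never pulls from docker.io:
    docker pull docker.io/nginx:alpine                    # authenticated pull on the host
    minikube -p addons-069011 image load docker.io/nginx:alpine
    # Or start the cluster against a mirror of Docker Hub:
    minikube start -p addons-069011 --registry-mirror=https://mirror.gcr.io
    # Inspect what cri-o reports for the image filesystem the eviction manager rejects:
    minikube -p addons-069011 ssh -- sudo crictl imagefsinfo
)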
	
	
	==> storage-provisioner [7d0db99be084d7a7996f085af51ba0b4b9263d1a30c5ba98cac79995b3641b35] <==
	W0917 00:06:49.554007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:06:51.557590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:06:51.562817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:06:53.566608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:06:53.570808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:06:55.574693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:06:55.580239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:06:57.583960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:06:57.588294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:06:59.591666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:06:59.597384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:01.601605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:01.606244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:03.609678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:03.613664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:05.617196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:05.621130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:07.624968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:07.630662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:09.633564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:09.637929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:11.642159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:11.648883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:13.652898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:13.656864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
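(Editor's note: these warnings are benign; the provisioner's leader election still uses the legacy v1 Endpoints API, which the server now flags in favor of EndpointSlice. A sketch for inspecting both objects, assuming the provisioner's election lock is the Endpoints named k8s.io-minikube-hostpath:

    kubectl --context addons-069011 -n kube-system get endpoints k8s.io-minikube-hostpath
    kubectl --context addons-069011 -n kube-system get endpointslices    # the replacement API
)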
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-069011 -n addons-069011
helpers_test.go:269: (dbg) Run:  kubectl --context addons-069011 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path registry-66898fdd98-bl4r5 helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-069011 describe pod nginx task-pv-pod test-local-path registry-66898fdd98-bl4r5 helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-069011 describe pod nginx task-pv-pod test-local-path registry-66898fdd98-bl4r5 helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb: exit status 1 (86.0945ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-069011/192.168.49.2
	Start Time:       Tue, 16 Sep 2025 23:56:47 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.24
	IPs:
	  IP:  10.244.0.24
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kksmh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kksmh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/nginx to addons-069011
	  Normal   Pulling    2m37s (x5 over 10m)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     67s (x5 over 8m43s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     67s (x5 over 8m43s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    2s (x15 over 8m43s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2s (x15 over 8m43s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-069011/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:01:13 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rfz5d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-rfz5d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-069011
	  Normal   BackOff    89s (x5 over 4m38s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     89s (x5 over 4m38s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    75s (x4 over 6m3s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7s (x4 over 4m38s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7s (x4 over 4m38s)   kubelet            Error: ErrImagePull
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s54zg (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-s54zg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "registry-66898fdd98-bl4r5" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-069011 describe pod nginx task-pv-pod test-local-path registry-66898fdd98-bl4r5 helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb: exit status 1
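(Editor's note: the non-zero exit here is a timing artifact: registry-66898fdd98-bl4r5 and the helper pod were deleted between the pod listing and the describe call, as the stderr NotFound messages show. A sketch that tolerates already-deleted pods, using get's --ignore-not-found:

    kubectl --context addons-069011 get pod nginx task-pv-pod test-local-path \
        registry-66898fdd98-bl4r5 -o name --ignore-not-found |
      xargs -r -n1 kubectl --context addons-069011 describe
)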
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 addons disable csi-hostpath-driver --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/CSI (373.39s)

                                                
                                    
TestAddons/parallel/LocalPath (345.69s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-069011 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-069011 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Non-zero exit: kubectl --context addons-069011 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (1.167µs)
helpers_test.go:404: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
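For reference, the Run line above is one iteration of a simple phase poll: the helper re-runs the kubectl jsonpath probe until the claim presumably reports Bound or the test context expires, which is the "context deadline exceeded" failure recorded here. A minimal Go sketch of that kind of loop, assuming kubectl is on PATH; the function name, 2s interval, and the Bound target are illustrative, not minikube's actual helper:

// Sketch only: poll a PVC's .status.phase via kubectl until it reads "Bound"
// or the surrounding context deadline fires (as it did in this test run).
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForPVCBound(ctx context.Context, kubeContext, ns, pvc string) error {
	ticker := time.NewTicker(2 * time.Second) // illustrative interval
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("failed waiting for PVC %s: %w", pvc, ctx.Err())
		case <-ticker.C:
			out, err := exec.CommandContext(ctx, "kubectl",
				"--context", kubeContext, "get", "pvc", pvc,
				"-o", "jsonpath={.status.phase}", "-n", ns).Output()
			if err == nil && string(out) == "Bound" {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	fmt.Println(waitForPVCBound(ctx, "addons-069011", "default", "test-pvc"))
}

This shape is consistent with the 1.167µs non-zero exit above: the context had already expired, so the final probe returned immediately without a useful kubectl run.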
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/LocalPath]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-069011
helpers_test.go:243: (dbg) docker inspect addons-069011:

-- stdout --
	[
	    {
	        "Id": "678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1",
	        "Created": "2025-09-16T23:48:50.029636255Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 523240,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-16T23:48:50.075029861Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/hosts",
	        "LogPath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1-json.log",
	        "Name": "/addons-069011",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-069011:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-069011",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1",
	                "LowerDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-069011",
	                "Source": "/var/lib/docker/volumes/addons-069011/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-069011",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-069011",
	                "name.minikube.sigs.k8s.io": "addons-069011",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f7ea0b62281ff8981f73b140342aff58601fbb663df7278dfdd6743a41abcca5",
	            "SandboxKey": "/var/run/docker/netns/f7ea0b62281f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-069011": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:4c:3e:1e:87:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d62ec0fa3bfb3ffd62859a508f03996c549db14f34473599ddd1b9022067b7b9",
	                    "EndpointID": "f8f4fe858390c8f96bc24eec26736fad3a3b1ba30f09e93e016a6a79e947f7af",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-069011",
	                        "678205c9d470"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
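Most of the inspect dump above matters for only a few fields: the host ports mapped to 8443/tcp and 22/tcp, and the container IP on the addons-069011 network. As a hedged aside, those values can be pulled directly with docker's documented -f/--format Go-template flag instead of scanning the full JSON; map keys containing hyphens (like the addons-069011 network) have to go through index rather than dot access:

// Sketch only: extract single fields from `docker inspect` with format templates.
package main

import (
	"fmt"
	"os/exec"
)

func inspectField(container, tmpl string) (string, error) {
	out, err := exec.Command("docker", "inspect", "-f", tmpl, container).Output()
	return string(out), err
}

func main() {
	// Host port bound to the API server's 8443/tcp ("33136" in the dump above).
	port, _ := inspectField("addons-069011",
		`{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`)
	// Container IP on the minikube network ("192.168.49.2" above).
	ip, _ := inspectField("addons-069011",
		`{{(index .NetworkSettings.Networks "addons-069011").IPAddress}}`)
	fmt.Println(port, ip)
}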
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-069011 -n addons-069011
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-069011 logs -n 25: (1.428586387s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ delete  │ -p download-only-515641 │ download-only-515641 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ start   │ --download-only -p download-docker-660125 --alsologtostderr --driver=docker  --container-runtime=crio │ download-docker-660125 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ │
	│ delete  │ -p download-docker-660125 │ download-docker-660125 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ start   │ --download-only -p binary-mirror-785971 --alsologtostderr --binary-mirror http://127.0.0.1:38515 --driver=docker  --container-runtime=crio │ binary-mirror-785971 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ │
	│ delete  │ -p binary-mirror-785971 │ binary-mirror-785971 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ addons  │ enable dashboard -p addons-069011 │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ │
	│ addons  │ disable dashboard -p addons-069011 │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ │
	│ start   │ -p addons-069011 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:55 UTC │
	│ addons  │ addons-069011 addons disable volcano --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:55 UTC │ 16 Sep 25 23:55 UTC │
	│ addons  │ addons-069011 addons disable gcp-auth --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ enable headlamp -p addons-069011 --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ addons-069011 addons disable metrics-server --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-069011 │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ addons-069011 addons disable registry-creds --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ addons-069011 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:57 UTC │
	│ addons  │ addons-069011 addons disable headlamp --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:58 UTC │ 16 Sep 25 23:58 UTC │
	│ addons  │ addons-069011 addons disable yakd --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 17 Sep 25 00:00 UTC │ 17 Sep 25 00:00 UTC │
	│ addons  │ addons-069011 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ addons  │ addons-069011 addons disable registry --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ addons  │ addons-069011 addons disable amd-gpu-device-plugin --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ addons  │ addons-069011 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ addons  │ addons-069011 addons disable ingress-dns --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ addons  │ addons-069011 addons disable ingress --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 17 Sep 25 00:04 UTC │ 17 Sep 25 00:04 UTC │
	│ addons  │ addons-069011 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 17 Sep 25 00:07 UTC │ 17 Sep 25 00:07 UTC │
	│ addons  │ addons-069011 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 17 Sep 25 00:07 UTC │ 17 Sep 25 00:07 UTC │
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:48:27
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:48:27.723751  522590 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:48:27.723864  522590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:48:27.723869  522590 out.go:374] Setting ErrFile to fd 2...
	I0916 23:48:27.723873  522590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:48:27.724066  522590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0916 23:48:27.724618  522590 out.go:368] Setting JSON to false
	I0916 23:48:27.725494  522590 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9051,"bootTime":1758057457,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:48:27.725585  522590 start.go:140] virtualization: kvm guest
	I0916 23:48:27.728073  522590 out.go:179] * [addons-069011] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:48:27.729850  522590 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:48:27.729868  522590 notify.go:220] Checking for updates...
	I0916 23:48:27.733822  522590 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:48:27.736141  522590 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0916 23:48:27.738039  522590 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0916 23:48:27.740423  522590 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:48:27.743368  522590 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:48:27.746574  522590 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:48:27.771724  522590 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:48:27.771874  522590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:48:27.829971  522590 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:48:27.818365984 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:48:27.830249  522590 docker.go:318] overlay module found
	I0916 23:48:27.832946  522590 out.go:179] * Using the docker driver based on user configuration
	I0916 23:48:27.834751  522590 start.go:304] selected driver: docker
	I0916 23:48:27.834826  522590 start.go:918] validating driver "docker" against <nil>
	I0916 23:48:27.834849  522590 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:48:27.835571  522590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:48:27.897913  522590 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:48:27.886229333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
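
Both driver-validation passes above shell out to "docker system info --format {{json .}}" and decode the JSON dump the daemon prints. A minimal Go sketch of that pattern, keeping only the fields a resource check would need (the struct and error handling here are illustrative, not minikube's actual cli_runner code):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo captures just a few of the many fields visible in the
    // log dump above; unknown JSON keys are simply ignored on decode.
    type dockerInfo struct {
        Driver   string `json:"Driver"`
        NCPU     int    `json:"NCPU"`
        MemTotal int64  `json:"MemTotal"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info",
            "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("driver=%s cpus=%d mem=%d bytes\n",
            info.Driver, info.NCPU, info.MemTotal)
    }

On this host the same call reported NCPU:8 and MemTotal:33652183040, which comfortably covers the requested 2 CPUs / 4096MB.
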
	I0916 23:48:27.898100  522590 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:48:27.898315  522590 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:48:27.900183  522590 out.go:179] * Using Docker driver with root privileges
	I0916 23:48:27.901481  522590 cni.go:84] Creating CNI manager for ""
	I0916 23:48:27.901597  522590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 23:48:27.901613  522590 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:48:27.901710  522590 start.go:348] cluster config:
	{Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0916 23:48:27.903324  522590 out.go:179] * Starting "addons-069011" primary control-plane node in "addons-069011" cluster
	I0916 23:48:27.904623  522590 cache.go:123] Beginning downloading kic base image for docker with crio
	I0916 23:48:27.905841  522590 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:48:27.907270  522590 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:48:27.907330  522590 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:48:27.907328  522590 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0916 23:48:27.907354  522590 cache.go:58] Caching tarball of preloaded images
	I0916 23:48:27.907495  522590 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 23:48:27.907513  522590 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0916 23:48:27.907895  522590 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/config.json ...
	I0916 23:48:27.907924  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/config.json: {Name:mk15dc7feab5fd17bb004b2e5f6ac3bc55ac0d4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:27.925199  522590 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0916 23:48:27.925352  522590 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0916 23:48:27.925371  522590 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0916 23:48:27.925375  522590 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0916 23:48:27.925383  522590 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0916 23:48:27.925403  522590 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0916 23:48:40.932191  522590 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0916 23:48:40.932224  522590 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:48:40.932259  522590 start.go:360] acquireMachinesLock for addons-069011: {Name:mk9387b718f452cc25627a84d4c20b7f46084ff2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:48:40.932371  522590 start.go:364] duration metric: took 90.542µs to acquireMachinesLock for "addons-069011"
	I0916 23:48:40.932411  522590 start.go:93] Provisioning new machine with config: &{Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 23:48:40.932527  522590 start.go:125] createHost starting for "" (driver="docker")
	I0916 23:48:40.934531  522590 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0916 23:48:40.934774  522590 start.go:159] libmachine.API.Create for "addons-069011" (driver="docker")
	I0916 23:48:40.934810  522590 client.go:168] LocalClient.Create starting
	I0916 23:48:40.934920  522590 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0916 23:48:41.819608  522590 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0916 23:48:42.094971  522590 cli_runner.go:164] Run: docker network inspect addons-069011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 23:48:42.113173  522590 cli_runner.go:211] docker network inspect addons-069011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 23:48:42.113240  522590 network_create.go:284] running [docker network inspect addons-069011] to gather additional debugging logs...
	I0916 23:48:42.113258  522590 cli_runner.go:164] Run: docker network inspect addons-069011
	W0916 23:48:42.130815  522590 cli_runner.go:211] docker network inspect addons-069011 returned with exit code 1
	I0916 23:48:42.130846  522590 network_create.go:287] error running [docker network inspect addons-069011]: docker network inspect addons-069011: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-069011 not found
	I0916 23:48:42.130884  522590 network_create.go:289] output of [docker network inspect addons-069011]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-069011 not found
	
	** /stderr **
	I0916 23:48:42.130990  522590 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:48:42.149832  522590 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002180220}
	I0916 23:48:42.149931  522590 network_create.go:124] attempt to create docker network addons-069011 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 23:48:42.150036  522590 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-069011 addons-069011
	I0916 23:48:42.212157  522590 network_create.go:108] docker network addons-069011 192.168.49.0/24 created
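
network_create.go above probes for the first free private /24 (settling on 192.168.49.0/24) and then creates a labeled bridge network for the profile. A Go sketch replaying exactly the docker CLI invocation shown in the log, via os/exec (the unusual "-o --ip-masq -o --icc" option pairs are copied verbatim from the log line):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // same flags, subnet, gateway, and labels as the log line above
        out, err := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet=192.168.49.0/24",
            "--gateway=192.168.49.1",
            "-o", "--ip-masq",
            "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io=addons-069011",
            "addons-069011").CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%v: %s", err, out))
        }
        fmt.Printf("created network: %s", out)
    }

With the gateway pinned to .1, the first client address .2 becomes the node's static IP, which is what kic.go calculates next.
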
	I0916 23:48:42.212194  522590 kic.go:121] calculated static IP "192.168.49.2" for the "addons-069011" container
	I0916 23:48:42.212312  522590 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:48:42.229867  522590 cli_runner.go:164] Run: docker volume create addons-069011 --label name.minikube.sigs.k8s.io=addons-069011 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:48:42.252846  522590 oci.go:103] Successfully created a docker volume addons-069011
	I0916 23:48:42.252968  522590 cli_runner.go:164] Run: docker run --rm --name addons-069011-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069011 --entrypoint /usr/bin/test -v addons-069011:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:48:45.649491  522590 cli_runner.go:217] Completed: docker run --rm --name addons-069011-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069011 --entrypoint /usr/bin/test -v addons-069011:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (3.39647838s)
	I0916 23:48:45.649523  522590 oci.go:107] Successfully prepared a docker volume addons-069011
	I0916 23:48:45.649558  522590 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:48:45.649589  522590 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:48:45.649695  522590 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-069011:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:48:49.956300  522590 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-069011:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.306552681s)
	I0916 23:48:49.956343  522590 kic.go:203] duration metric: took 4.306749088s to extract preloaded images to volume ...
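
The preload step above mounts the lz4 tarball read-only into a throwaway container and untars it into the named volume, so the node container later starts with /var already populated with images. A Go sketch of that docker-run pattern (the image digest pin from the log is elided here for brevity; paths are the ones shown above):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        tarball := "/home/jenkins/minikube-integration/21550-517646/.minikube/cache/" +
            "preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4"
        // a --rm container whose only job is to unpack the preload into
        // the named volume that will become the node's /var
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", "addons-069011:/extractDir",
            "gcr.io/k8s-minikube/kicbase:v0.0.48",
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }
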
	W0916 23:48:49.956477  522590 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:48:49.956523  522590 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:48:49.956572  522590 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:48:50.013382  522590 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-069011 --name addons-069011 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069011 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-069011 --network addons-069011 --ip 192.168.49.2 --volume addons-069011:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:48:50.304600  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Running}}
	I0916 23:48:50.323420  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:48:50.342386  522590 cli_runner.go:164] Run: docker exec addons-069011 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:48:50.402276  522590 oci.go:144] the created container "addons-069011" has a running status.
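
After the long "docker run -d" above, readiness is confirmed by inspecting the container state, exactly the template shown in the log. A small Go polling sketch of that check (the retry loop and timeout are illustrative choices, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // running reports whether docker considers the named container to be
    // in the Running state, using the same inspect template as the log.
    func running(name string) bool {
        out, err := exec.Command("docker", "container", "inspect",
            name, "--format={{.State.Running}}").Output()
        return err == nil && strings.TrimSpace(string(out)) == "true"
    }

    func main() {
        for i := 0; i < 30; i++ {
            if running("addons-069011") {
                fmt.Println("container is running")
                return
            }
            time.Sleep(time.Second)
        }
        fmt.Println("timed out waiting for container")
    }
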
	I0916 23:48:50.402326  522590 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa...
	I0916 23:48:50.521235  522590 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:48:50.553384  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:48:50.579068  522590 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:48:50.579099  522590 kic_runner.go:114] Args: [docker exec --privileged addons-069011 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:48:50.638566  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:48:50.659803  522590 machine.go:93] provisionDockerMachine start ...
	I0916 23:48:50.660411  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:50.680019  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:50.680310  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:50.680332  522590 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:48:50.820950  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-069011
	
	I0916 23:48:50.820990  522590 ubuntu.go:182] provisioning hostname "addons-069011"
	I0916 23:48:50.821063  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:50.841195  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:50.841673  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:50.841710  522590 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-069011 && echo "addons-069011" | sudo tee /etc/hostname
	I0916 23:48:50.996855  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-069011
	
	I0916 23:48:50.996967  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.016407  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:51.016637  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:51.016655  522590 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-069011' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-069011/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-069011' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:48:51.154270  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:48:51.154311  522590 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0916 23:48:51.154380  522590 ubuntu.go:190] setting up certificates
	I0916 23:48:51.154420  522590 provision.go:84] configureAuth start
	I0916 23:48:51.154487  522590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069011
	I0916 23:48:51.173820  522590 provision.go:143] copyHostCerts
	I0916 23:48:51.173904  522590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0916 23:48:51.174069  522590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0916 23:48:51.174140  522590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0916 23:48:51.174195  522590 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.addons-069011 san=[127.0.0.1 192.168.49.2 addons-069011 localhost minikube]
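
configureAuth above issues a server certificate whose SANs cover every address the machine answers on: 127.0.0.1, 192.168.49.2, addons-069011, localhost, minikube. A minimal crypto/x509 sketch producing a certificate with that SAN set (self-signed here for brevity; minikube instead signs with the CA key generated earlier, and the exact extensions it sets may differ):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-069011"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN list from the provision log line above
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            DNSNames:    []string{"addons-069011", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
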
	I0916 23:48:51.417777  522590 provision.go:177] copyRemoteCerts
	I0916 23:48:51.417839  522590 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:48:51.417897  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.435902  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:51.535686  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 23:48:51.563321  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:48:51.590971  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 23:48:51.617420  522590 provision.go:87] duration metric: took 462.978002ms to configureAuth
	I0916 23:48:51.617461  522590 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:48:51.617668  522590 config.go:182] Loaded profile config "addons-069011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:48:51.617795  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.638144  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:51.638409  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:51.638436  522590 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 23:48:51.891077  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 23:48:51.891114  522590 machine.go:96] duration metric: took 1.230812219s to provisionDockerMachine
	I0916 23:48:51.891125  522590 client.go:171] duration metric: took 10.956309615s to LocalClient.Create
	I0916 23:48:51.891146  522590 start.go:167] duration metric: took 10.956377105s to libmachine.API.Create "addons-069011"
	I0916 23:48:51.891155  522590 start.go:293] postStartSetup for "addons-069011" (driver="docker")
	I0916 23:48:51.891170  522590 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:48:51.891245  522590 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:48:51.891288  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.909900  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.010593  522590 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:48:52.014317  522590 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:48:52.014357  522590 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:48:52.014366  522590 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:48:52.014375  522590 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:48:52.014406  522590 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0916 23:48:52.014479  522590 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0916 23:48:52.014515  522590 start.go:296] duration metric: took 123.348567ms for postStartSetup
	I0916 23:48:52.014852  522590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069011
	I0916 23:48:52.034024  522590 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/config.json ...
	I0916 23:48:52.034357  522590 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:48:52.034430  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:52.053383  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.147697  522590 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:48:52.152300  522590 start.go:128] duration metric: took 11.219755748s to createHost
	I0916 23:48:52.152322  522590 start.go:83] releasing machines lock for "addons-069011", held for 11.219940729s
	I0916 23:48:52.152383  522590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069011
	I0916 23:48:52.170897  522590 ssh_runner.go:195] Run: cat /version.json
	I0916 23:48:52.170959  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:52.170960  522590 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:48:52.171033  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:52.190054  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.190316  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.282770  522590 ssh_runner.go:195] Run: systemctl --version
	I0916 23:48:52.358127  522590 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 23:48:52.500662  522590 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:48:52.505640  522590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:48:52.530299  522590 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:48:52.530413  522590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:48:52.562277  522590 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:48:52.562302  522590 start.go:495] detecting cgroup driver to use...
	I0916 23:48:52.562333  522590 detect.go:190] detected "systemd" cgroup driver on host os
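
detect.go reports the "systemd" cgroup driver for this host. One common heuristic for making that call (an assumption about the detection logic, not necessarily what detect.go does) is to check whether PID 1 is systemd:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // /proc/1/comm holds the executable name of PID 1; "systemd"
        // implies the systemd cgroup driver is the safe choice
        comm, err := os.ReadFile("/proc/1/comm")
        if err != nil {
            panic(err)
        }
        if strings.TrimSpace(string(comm)) == "systemd" {
            fmt.Println("cgroup driver: systemd")
        } else {
            fmt.Println("cgroup driver: cgroupfs")
        }
    }

That result is what drives the "cgroup_manager = systemd" rewrite of the CRI-O config a few lines below.
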
	I0916 23:48:52.562405  522590 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:48:52.578904  522590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:48:52.592493  522590 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:48:52.592567  522590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:48:52.607812  522590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:48:52.623718  522590 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:48:52.695401  522590 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:48:52.772869  522590 docker.go:234] disabling docker service ...
	I0916 23:48:52.772931  522590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:48:52.793499  522590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:48:52.806446  522590 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:48:52.880604  522590 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:48:52.994666  522590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:48:53.008181  522590 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:48:53.026581  522590 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0916 23:48:53.026648  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.040463  522590 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0916 23:48:53.040546  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.052415  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.063700  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.074445  522590 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:48:53.085081  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.097098  522590 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.114871  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.125827  522590 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:48:53.135170  522590 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:48:53.145546  522590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:48:53.253634  522590 ssh_runner.go:195] Run: sudo systemctl restart crio
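
Taken together, the sed one-liners above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys before crio is restarted (reconstructed from the commands, not captured from the node; the section headers are the stock CRI-O layout and are assumed here):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
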
	I0916 23:48:53.356442  522590 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 23:48:53.356540  522590 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 23:48:53.360459  522590 start.go:563] Will wait 60s for crictl version
	I0916 23:48:53.360526  522590 ssh_runner.go:195] Run: which crictl
	I0916 23:48:53.364103  522590 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:48:53.402094  522590 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 23:48:53.402233  522590 ssh_runner.go:195] Run: crio --version
	I0916 23:48:53.441123  522590 ssh_runner.go:195] Run: crio --version
	I0916 23:48:53.481919  522590 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0916 23:48:53.483462  522590 cli_runner.go:164] Run: docker network inspect addons-069011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:48:53.502054  522590 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:48:53.506129  522590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
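
The bash one-liner above updates /etc/hosts with a replace-or-append idiom: strip any stale line for the name, append the fresh mapping, write to a temp file, then copy it into place (the copy needs sudo, since the shell redirect alone could not write /etc/hosts). A Go sketch of the same idiom against a scratch file (paths and the .new suffix are illustrative):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost rewrites hostsPath+".new" so that exactly one line maps
    // ip to name, mirroring the grep -v / echo / cp pipeline in the log.
    func upsertHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(hostsPath+".new", []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        path := "/tmp/hosts-demo"
        if err := os.WriteFile(path, []byte("127.0.0.1\tlocalhost\n"), 0644); err != nil {
            panic(err)
        }
        if err := upsertHost(path, "192.168.49.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
        fmt.Println("wrote", path+".new")
    }
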
	I0916 23:48:53.518646  522590 kubeadm.go:875] updating cluster {Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 23:48:53.518762  522590 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:48:53.518816  522590 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:48:53.590933  522590 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 23:48:53.590961  522590 crio.go:433] Images already preloaded, skipping extraction
	I0916 23:48:53.591020  522590 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:48:53.627023  522590 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 23:48:53.627057  522590 cache_images.go:85] Images are preloaded, skipping loading
	I0916 23:48:53.627066  522590 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0916 23:48:53.627155  522590 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-069011 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:48:53.627228  522590 ssh_runner.go:195] Run: crio config
	I0916 23:48:53.674869  522590 cni.go:84] Creating CNI manager for ""
	I0916 23:48:53.674893  522590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 23:48:53.674906  522590 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 23:48:53.674926  522590 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-069011 NodeName:addons-069011 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 23:48:53.675093  522590 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-069011"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 23:48:53.675157  522590 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:48:53.685496  522590 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:48:53.685568  522590 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 23:48:53.695890  522590 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0916 23:48:53.715420  522590 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:48:53.738183  522590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0916 23:48:53.758975  522590 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 23:48:53.763002  522590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:48:53.775153  522590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:48:53.837066  522590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:48:53.861100  522590 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011 for IP: 192.168.49.2
	I0916 23:48:53.861120  522590 certs.go:194] generating shared ca certs ...
	I0916 23:48:53.861145  522590 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:53.861308  522590 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0916 23:48:54.155814  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt ...
	I0916 23:48:54.155846  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt: {Name:mk009b1713fd08c38e8c6ac054b69276424ded29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.156071  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key ...
	I0916 23:48:54.156093  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key: {Name:mk39b68875de7851b17692da85e287f48166d2fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.156213  522590 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0916 23:48:54.291541  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt ...
	I0916 23:48:54.291579  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt: {Name:mk94baf5fb1a8134bb0c9a9f3d32b751fe0bf777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.291793  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key ...
	I0916 23:48:54.291817  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key: {Name:mk06b3e70f919971eec12f66023f6279f2a9059e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.291928  522590 certs.go:256] generating profile certs ...
	I0916 23:48:54.292014  522590 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.key
	I0916 23:48:54.292060  522590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt with IP's: []
	I0916 23:48:54.529110  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt ...
	I0916 23:48:54.529147  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: {Name:mk9156e00306316f93255eae42ecd81bb5d60b0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.529374  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.key ...
	I0916 23:48:54.529406  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.key: {Name:mk15bd78effcf8815d5571a84284c31db31b997e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.529525  522590 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd
	I0916 23:48:54.529556  522590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0916 23:48:54.601370  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd ...
	I0916 23:48:54.601415  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd: {Name:mkb42f86b810cddd05c27083cd910769800b1942 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.602548  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd ...
	I0916 23:48:54.602578  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd: {Name:mkf41ec91a0589b4d908c830ee946e4604a6886c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.603343  522590 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt
	I0916 23:48:54.603493  522590 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key
	I0916 23:48:54.603577  522590 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key
	I0916 23:48:54.603602  522590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt with IP's: []
	I0916 23:48:54.685718  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt ...
	I0916 23:48:54.685751  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt: {Name:mk4c4f7fbd326f3d00c11caa86441b715a5844e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.686777  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key ...
	I0916 23:48:54.686809  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key: {Name:mkde64e1b9ef5bdc16ad6f2b11b391d65f689b86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.687062  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:48:54.687107  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0916 23:48:54.687130  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:48:54.687161  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0916 23:48:54.687932  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:48:54.717259  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:48:54.744669  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:48:54.771438  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 23:48:54.799454  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 23:48:54.826220  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:48:54.853243  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:48:54.878912  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 23:48:54.905711  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:48:54.935757  522590 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 23:48:54.956698  522590 ssh_runner.go:195] Run: openssl version
	I0916 23:48:54.962817  522590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:48:54.976805  522590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:48:54.980979  522590 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:48:54.981051  522590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:48:54.988637  522590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
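
The two openssl steps above install the minikube CA into the system trust store: "openssl x509 -hash -noout" prints the certificate's subject hash, and OpenSSL expects the CA to be reachable as /etc/ssl/certs/<hash>.0, hence the b5213941.0 symlink. A Go sketch computing that link name (assumes openssl on PATH and the pem path from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        // OpenSSL looks up CAs by <subject-hash>.<n>; minikube links .0
        fmt.Println("trust-store name:", filepath.Join("/etc/ssl/certs", hash+".0"))
    }
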
	I0916 23:48:55.000379  522590 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:48:55.004385  522590 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:48:55.004456  522590 kubeadm.go:392] StartCluster: {Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:48:55.004547  522590 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 23:48:55.004599  522590 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 23:48:55.043443  522590 cri.go:89] found id: ""
	I0916 23:48:55.043525  522590 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 23:48:55.053975  522590 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 23:48:55.064119  522590 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 23:48:55.064186  522590 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 23:48:55.074381  522590 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 23:48:55.074421  522590 kubeadm.go:157] found existing configuration files:
	
	I0916 23:48:55.074469  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 23:48:55.084667  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 23:48:55.084749  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 23:48:55.095859  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 23:48:55.106006  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 23:48:55.106068  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 23:48:55.115485  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 23:48:55.124880  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 23:48:55.124952  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 23:48:55.134292  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 23:48:55.144662  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 23:48:55.144725  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 23:48:55.154111  522590 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 23:48:55.211692  522590 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0916 23:48:55.271378  522590 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 23:49:04.949743  522590 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0916 23:49:04.949820  522590 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 23:49:04.949928  522590 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 23:49:04.950016  522590 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0916 23:49:04.950100  522590 kubeadm.go:310] OS: Linux
	I0916 23:49:04.950168  522590 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 23:49:04.950250  522590 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 23:49:04.950311  522590 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 23:49:04.950355  522590 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 23:49:04.950436  522590 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 23:49:04.950511  522590 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 23:49:04.950590  522590 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 23:49:04.950659  522590 kubeadm.go:310] CGROUPS_IO: enabled
	I0916 23:49:04.950779  522590 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 23:49:04.950896  522590 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 23:49:04.950988  522590 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 23:49:04.951039  522590 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 23:49:04.953148  522590 out.go:252]   - Generating certificates and keys ...
	I0916 23:49:04.953253  522590 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 23:49:04.953350  522590 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 23:49:04.953473  522590 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 23:49:04.953544  522590 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 23:49:04.953598  522590 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 23:49:04.953656  522590 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 23:49:04.953723  522590 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 23:49:04.953871  522590 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-069011 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:49:04.953944  522590 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 23:49:04.954104  522590 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-069011 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:49:04.954204  522590 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 23:49:04.954308  522590 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 23:49:04.954373  522590 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 23:49:04.954472  522590 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 23:49:04.954529  522590 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 23:49:04.954641  522590 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 23:49:04.954719  522590 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 23:49:04.954827  522590 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 23:49:04.954889  522590 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 23:49:04.954961  522590 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 23:49:04.955029  522590 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 23:49:04.956667  522590 out.go:252]   - Booting up control plane ...
	I0916 23:49:04.956807  522590 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 23:49:04.956925  522590 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 23:49:04.956985  522590 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 23:49:04.957219  522590 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 23:49:04.957368  522590 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0916 23:49:04.957516  522590 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0916 23:49:04.957633  522590 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 23:49:04.957703  522590 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 23:49:04.957908  522590 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 23:49:04.958044  522590 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 23:49:04.958151  522590 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.203651ms
	I0916 23:49:04.958278  522590 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0916 23:49:04.958374  522590 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0916 23:49:04.958531  522590 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0916 23:49:04.958637  522590 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0916 23:49:04.958758  522590 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.870805967s
	I0916 23:49:04.958876  522590 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.059203573s
	I0916 23:49:04.958980  522590 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.002212231s
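
The three control-plane-check URLs above are the endpoints kubeadm polls during init; if one of these checks ever stalls, they can be probed by hand from the node. A sketch, assuming shell access to the node (-k skips TLS verification because the serving certificates are cluster-internal):

    curl -k https://192.168.49.2:8443/livez       # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz       # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez         # kube-scheduler
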
	I0916 23:49:04.959143  522590 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 23:49:04.959322  522590 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 23:49:04.959464  522590 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 23:49:04.959729  522590 kubeadm.go:310] [mark-control-plane] Marking the node addons-069011 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 23:49:04.959828  522590 kubeadm.go:310] [bootstrap-token] Using token: hth27u.vwd374r3m591cy8w
	I0916 23:49:04.961508  522590 out.go:252]   - Configuring RBAC rules ...
	I0916 23:49:04.961663  522590 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 23:49:04.961761  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 23:49:04.961918  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 23:49:04.962103  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 23:49:04.962249  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 23:49:04.962324  522590 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 23:49:04.962449  522590 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 23:49:04.962510  522590 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 23:49:04.962584  522590 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 23:49:04.962595  522590 kubeadm.go:310] 
	I0916 23:49:04.962677  522590 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 23:49:04.962687  522590 kubeadm.go:310] 
	I0916 23:49:04.962800  522590 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 23:49:04.962816  522590 kubeadm.go:310] 
	I0916 23:49:04.962858  522590 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 23:49:04.962957  522590 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 23:49:04.963031  522590 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 23:49:04.963041  522590 kubeadm.go:310] 
	I0916 23:49:04.963139  522590 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 23:49:04.963150  522590 kubeadm.go:310] 
	I0916 23:49:04.963217  522590 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 23:49:04.963226  522590 kubeadm.go:310] 
	I0916 23:49:04.963305  522590 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 23:49:04.963432  522590 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 23:49:04.963527  522590 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 23:49:04.963541  522590 kubeadm.go:310] 
	I0916 23:49:04.963668  522590 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 23:49:04.963778  522590 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 23:49:04.963792  522590 kubeadm.go:310] 
	I0916 23:49:04.963908  522590 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hth27u.vwd374r3m591cy8w \
	I0916 23:49:04.964060  522590 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 \
	I0916 23:49:04.964108  522590 kubeadm.go:310] 	--control-plane 
	I0916 23:49:04.964118  522590 kubeadm.go:310] 
	I0916 23:49:04.964224  522590 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 23:49:04.964234  522590 kubeadm.go:310] 
	I0916 23:49:04.964354  522590 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hth27u.vwd374r3m591cy8w \
	I0916 23:49:04.964531  522590 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 
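
If the printed bootstrap token or hash is misplaced, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA using the standard kubeadm recipe (a sketch, assuming access to /etc/kubernetes/pki/ca.crt on the control plane):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
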
	I0916 23:49:04.964546  522590 cni.go:84] Creating CNI manager for ""
	I0916 23:49:04.964565  522590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 23:49:04.966440  522590 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0916 23:49:04.968135  522590 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 23:49:04.972876  522590 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0916 23:49:04.972901  522590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 23:49:04.992864  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 23:49:05.238639  522590 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 23:49:05.238825  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:05.238851  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-069011 minikube.k8s.io/updated_at=2025_09_16T23_49_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=addons-069011 minikube.k8s.io/primary=true
	I0916 23:49:05.248222  522590 ops.go:34] apiserver oom_adj: -16
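
The oom_adj value of -16 read above means the kernel's OOM killer strongly deprioritizes the apiserver process. oom_adj is the legacy interface; a sketch re-reading both the legacy and current knobs (assuming pgrep matches a single kube-apiserver process):

    cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy knob, range -17..15
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # current knob, range -1000..1000
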
	I0916 23:49:05.324340  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:05.825316  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:06.324537  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:06.824724  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:07.325050  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:07.824729  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:08.325083  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:08.824525  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:09.324551  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:09.825331  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:09.895926  522590 kubeadm.go:1105] duration metric: took 4.65716259s to wait for elevateKubeSystemPrivileges
	I0916 23:49:09.895964  522590 kubeadm.go:394] duration metric: took 14.891511977s to StartCluster
	I0916 23:49:09.895989  522590 settings.go:142] acquiring lock: {Name:mk3b4e5824fb8718eece00dc70a9d05f0af2a028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:49:09.896108  522590 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0916 23:49:09.896612  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:49:09.896807  522590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 23:49:09.896820  522590 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 23:49:09.896883  522590 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 23:49:09.897046  522590 config.go:182] Loaded profile config "addons-069011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:49:09.897061  522590 addons.go:69] Setting volcano=true in profile "addons-069011"
	I0916 23:49:09.897068  522590 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-069011"
	I0916 23:49:09.897082  522590 addons.go:238] Setting addon volcano=true in "addons-069011"
	I0916 23:49:09.897052  522590 addons.go:69] Setting yakd=true in profile "addons-069011"
	I0916 23:49:09.897090  522590 addons.go:69] Setting registry-creds=true in profile "addons-069011"
	I0916 23:49:09.897102  522590 addons.go:238] Setting addon yakd=true in "addons-069011"
	I0916 23:49:09.897112  522590 addons.go:238] Setting addon registry-creds=true in "addons-069011"
	I0916 23:49:09.897122  522590 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-069011"
	I0916 23:49:09.897128  522590 addons.go:69] Setting storage-provisioner=true in profile "addons-069011"
	I0916 23:49:09.897146  522590 addons.go:69] Setting volumesnapshots=true in profile "addons-069011"
	I0916 23:49:09.897161  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897169  522590 addons.go:69] Setting metrics-server=true in profile "addons-069011"
	I0916 23:49:09.897176  522590 addons.go:69] Setting cloud-spanner=true in profile "addons-069011"
	I0916 23:49:09.897178  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897047  522590 addons.go:69] Setting inspektor-gadget=true in profile "addons-069011"
	I0916 23:49:09.897165  522590 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-069011"
	I0916 23:49:09.897206  522590 addons.go:238] Setting addon cloud-spanner=true in "addons-069011"
	I0916 23:49:09.897216  522590 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-069011"
	I0916 23:49:09.897232  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897233  522590 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-069011"
	I0916 23:49:09.897264  522590 addons.go:238] Setting addon inspektor-gadget=true in "addons-069011"
	I0916 23:49:09.897181  522590 addons.go:238] Setting addon metrics-server=true in "addons-069011"
	I0916 23:49:09.897423  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897445  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897164  522590 addons.go:238] Setting addon volumesnapshots=true in "addons-069011"
	I0916 23:49:09.897586  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897092  522590 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-069011"
	I0916 23:49:09.897619  522590 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-069011"
	I0916 23:49:09.897820  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897823  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897828  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897883  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897925  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897931  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.898010  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897153  522590 addons.go:238] Setting addon storage-provisioner=true in "addons-069011"
	I0916 23:49:09.898348  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897270  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897123  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.898989  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.899031  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897162  522590 addons.go:69] Setting registry=true in profile "addons-069011"
	I0916 23:49:09.899114  522590 addons.go:238] Setting addon registry=true in "addons-069011"
	I0916 23:49:09.899147  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897135  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897171  522590 addons.go:69] Setting default-storageclass=true in profile "addons-069011"
	I0916 23:49:09.899508  522590 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-069011"
	I0916 23:49:09.897278  522590 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-069011"
	I0916 23:49:09.899697  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897286  522590 addons.go:69] Setting ingress=true in profile "addons-069011"
	I0916 23:49:09.899882  522590 addons.go:238] Setting addon ingress=true in "addons-069011"
	I0916 23:49:09.899918  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897295  522590 addons.go:69] Setting gcp-auth=true in profile "addons-069011"
	I0916 23:49:09.899976  522590 mustload.go:65] Loading cluster: addons-069011
	I0916 23:49:09.897305  522590 addons.go:69] Setting ingress-dns=true in profile "addons-069011"
	I0916 23:49:09.900142  522590 addons.go:238] Setting addon ingress-dns=true in "addons-069011"
	I0916 23:49:09.900176  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.900346  522590 out.go:179] * Verifying Kubernetes components...
	I0916 23:49:09.902141  522590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:49:09.906029  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906489  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906586  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906921  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.907068  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.909270  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.909876  522590 config.go:182] Loaded profile config "addons-069011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:49:09.910613  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906032  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.966036  522590 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-069011"
	I0916 23:49:09.966110  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.966784  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	W0916 23:49:09.981981  522590 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0916 23:49:09.986930  522590 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0916 23:49:09.989771  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 23:49:09.989801  522590 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 23:49:09.989878  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:09.990151  522590 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0916 23:49:09.991871  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 23:49:09.992484  522590 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0916 23:49:09.993934  522590 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 23:49:09.993954  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 23:49:09.994025  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:09.994418  522590 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0916 23:49:09.994431  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0916 23:49:09.994485  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:09.997452  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 23:49:09.997452  522590 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0916 23:49:10.001152  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 23:49:10.001192  522590 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0916 23:49:10.001229  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0916 23:49:10.001311  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.003359  522590 addons.go:238] Setting addon default-storageclass=true in "addons-069011"
	I0916 23:49:10.003429  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:10.003879  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:10.004609  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 23:49:10.006166  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 23:49:10.007322  522590 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0916 23:49:10.008643  522590 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 23:49:10.008663  522590 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0916 23:49:10.008684  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 23:49:10.008820  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 23:49:10.008829  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.010190  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 23:49:10.010220  522590 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 23:49:10.010294  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.012486  522590 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 23:49:10.012564  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 23:49:10.014826  522590 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:49:10.014910  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 23:49:10.015167  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.016771  522590 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0916 23:49:10.018372  522590 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 23:49:10.018418  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0916 23:49:10.018493  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 23:49:10.018494  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.019739  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 23:49:10.019764  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 23:49:10.019840  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.023104  522590 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0916 23:49:10.023240  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0916 23:49:10.024340  522590 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 23:49:10.024365  522590 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0916 23:49:10.024441  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.025784  522590 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0916 23:49:10.025900  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:49:10.027422  522590 out.go:179]   - Using image docker.io/registry:3.0.0
	I0916 23:49:10.029503  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:10.032360  522590 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 23:49:10.032382  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 23:49:10.032451  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.032643  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:49:10.037094  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.038113  522590 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 23:49:10.038152  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 23:49:10.038221  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.058927  522590 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 23:49:10.058950  522590 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 23:49:10.059009  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.063705  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 23:49:10.066747  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 23:49:10.066781  522590 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 23:49:10.066937  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.067231  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.069660  522590 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 23:49:10.072852  522590 out.go:179]   - Using image docker.io/busybox:stable
	I0916 23:49:10.077706  522590 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 23:49:10.077738  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 23:49:10.077812  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.081171  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.099594  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.099601  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.101679  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.103303  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.109277  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.113014  522590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
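
The sed pipeline above injects a hosts block (mapping host.minikube.internal to the host gateway IP) and a log directive into the CoreDNS Corefile before replacing the ConfigMap. Reconstructed from the sed expressions alone, the edited Corefile fragment should look roughly like this; lines the pipeline does not touch are elided as "...":

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }
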
	I0916 23:49:10.114406  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.114692  522590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:49:10.116962  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.132677  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.135654  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.137795  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.144377  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.149192  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.245816  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 23:49:10.245838  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 23:49:10.253803  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0916 23:49:10.256108  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 23:49:10.265944  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 23:49:10.288794  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 23:49:10.288827  522590 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 23:49:10.291276  522590 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:10.291301  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0916 23:49:10.298027  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:49:10.301761  522590 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 23:49:10.301815  522590 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 23:49:10.303881  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 23:49:10.303906  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 23:49:10.307619  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0916 23:49:10.321011  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 23:49:10.321513  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 23:49:10.321533  522590 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 23:49:10.335228  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 23:49:10.342628  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 23:49:10.353105  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 23:49:10.360830  522590 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 23:49:10.360864  522590 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 23:49:10.366097  522590 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 23:49:10.366124  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 23:49:10.368966  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 23:49:10.368997  522590 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 23:49:10.374870  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 23:49:10.374897  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 23:49:10.383228  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:10.419473  522590 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 23:49:10.419505  522590 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 23:49:10.420148  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 23:49:10.420173  522590 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 23:49:10.431466  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 23:49:10.431495  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 23:49:10.431508  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 23:49:10.447520  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 23:49:10.491601  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 23:49:10.491635  522590 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 23:49:10.495666  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 23:49:10.495699  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 23:49:10.522266  522590 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 23:49:10.522304  522590 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 23:49:10.608119  522590 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0916 23:49:10.610081  522590 node_ready.go:35] waiting up to 6m0s for node "addons-069011" to be "Ready" ...
	I0916 23:49:10.613978  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 23:49:10.614095  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 23:49:10.619888  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 23:49:10.619918  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 23:49:10.636272  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 23:49:10.636303  522590 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 23:49:10.689230  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 23:49:10.705272  522590 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 23:49:10.705297  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 23:49:10.708368  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 23:49:10.708557  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 23:49:10.788275  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 23:49:10.788306  522590 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 23:49:10.806501  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 23:49:10.869607  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 23:49:10.869632  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 23:49:10.937889  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 23:49:10.937914  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 23:49:11.002071  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 23:49:11.002102  522590 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 23:49:11.047895  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 23:49:11.130142  522590 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-069011" context rescaled to 1 replicas
	I0916 23:49:11.643350  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.290178117s)
	I0916 23:49:11.643439  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.30078278s)
	I0916 23:49:11.643452  522590 addons.go:479] Verifying addon ingress=true in "addons-069011"
	I0916 23:49:11.643582  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.212051777s)
	I0916 23:49:11.643613  522590 addons.go:479] Verifying addon registry=true in "addons-069011"
	I0916 23:49:11.643522  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.260251451s)
	I0916 23:49:11.643722  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.196160875s)
	W0916 23:49:11.643735  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:11.643740  522590 addons.go:479] Verifying addon metrics-server=true in "addons-069011"
	I0916 23:49:11.643761  522590 retry.go:31] will retry after 298.602868ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
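The failure above is a client-side schema check, not a cluster problem: kubectl apply with validation enabled requires every document in a manifest to carry the top-level apiVersion and kind fields, and one document in ig-crd.yaml evidently lacks them, so the rest of the bundle applies while that file errors out on every retry. The check can be reproduced without touching the cluster (a sketch; assumes a copy of the manifest is at hand):

    # Every Kubernetes object must begin with these two fields, e.g. for a CRD:
    #   apiVersion: apiextensions.k8s.io/v1
    #   kind: CustomResourceDefinition
    # A client-side dry run surfaces the same "apiVersion not set, kind not set" error:
    kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml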
	I0916 23:49:11.646501  522590 out.go:179] * Verifying registry addon...
	I0916 23:49:11.646501  522590 out.go:179] * Verifying ingress addon...
	I0916 23:49:11.646504  522590 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-069011 service yakd-dashboard -n yakd-dashboard
	
	I0916 23:49:11.652191  522590 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 23:49:11.652206  522590 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 23:49:11.655147  522590 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 23:49:11.655173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:11.655271  522590 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 23:49:11.655299  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
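The kapi.go:96 lines that repeat from here on are minikube's own poll loop: list pods matching each addon's label selector and block until they report Ready. The same wait, expressed directly with kubectl (a sketch; label selectors and namespaces taken from the log above):

    kubectl -n kube-system wait pod -l kubernetes.io/minikube-addons=registry \
        --for=condition=Ready --timeout=6m
    kubectl -n ingress-nginx wait pod -l app.kubernetes.io/name=ingress-nginx \
        --for=condition=Ready --timeout=6m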
	I0916 23:49:11.943533  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:12.143203  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.336408881s)
	W0916 23:49:12.143280  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 23:49:12.143297  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.095362374s)
	I0916 23:49:12.143318  522590 retry.go:31] will retry after 271.042655ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
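This second failure is an ordering race rather than a bad manifest: the VolumeSnapshotClass object is submitted in the same apply as the CRDs that define it, and the API server has not yet registered the new snapshot.storage.k8s.io/v1 kinds when the class is validated, hence "ensure CRDs are installed first". Applying in two phases with an explicit wait avoids the race (a sketch; file paths as in the log):

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
        -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
        -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for condition=established --timeout=60s \
        crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml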
	I0916 23:49:12.143322  522590 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-069011"
	I0916 23:49:12.145833  522590 out.go:179] * Verifying csi-hostpath-driver addon...
	I0916 23:49:12.148236  522590 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 23:49:12.153014  522590 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 23:49:12.153041  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:12.157053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:12.157321  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:12.415287  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0916 23:49:12.575627  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:12.575662  522590 retry.go:31] will retry after 298.950278ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:12.614105  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
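Interleaved with the addon retries, node_ready.go keeps polling the node object itself; none of the pods above can leave Pending until the kubelet reports Ready. The one-shot equivalent of that poll (a sketch, using the node name from the log):

    kubectl wait node/addons-069011 --for=condition=Ready --timeout=5m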
	I0916 23:49:12.652906  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:12.655120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:12.655721  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:12.875699  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:13.152262  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:13.155946  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:13.156155  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:13.653200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:13.655268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:13.655558  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:14.152741  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:14.154674  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:14.154869  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:14.651414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:14.654802  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:14.654981  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:14.929904  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.51454475s)
	I0916 23:49:14.929925  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.05417803s)
	W0916 23:49:14.929968  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:14.929993  522590 retry.go:31] will retry after 724.402782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:15.113335  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:15.152058  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:15.155353  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:15.155409  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:15.651139  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:15.655103  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:15.655174  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:15.655439  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:16.152053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:16.155268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:16.155481  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:16.233482  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:16.233517  522590 retry.go:31] will retry after 528.645422ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:16.652337  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:16.654976  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:16.655052  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:16.763126  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:17.152861  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:17.155200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:17.155374  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:17.346237  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:17.346292  522590 retry.go:31] will retry after 1.241721728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:17.613291  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:17.637138  522590 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 23:49:17.637240  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:17.651912  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:17.655594  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:17.655874  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:17.659459  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:17.770859  522590 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 23:49:17.790444  522590 addons.go:238] Setting addon gcp-auth=true in "addons-069011"
	I0916 23:49:17.790517  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:17.790880  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:17.810255  522590 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 23:49:17.810334  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:17.829504  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:17.924366  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:49:17.925772  522590 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0916 23:49:17.926989  522590 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 23:49:17.927016  522590 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 23:49:17.947928  522590 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 23:49:17.947963  522590 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 23:49:17.968887  522590 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 23:49:17.968910  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 23:49:17.988471  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 23:49:18.151889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:18.155501  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:18.155799  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:18.360333  522590 addons.go:479] Verifying addon gcp-auth=true in "addons-069011"
	I0916 23:49:18.361695  522590 out.go:179] * Verifying gcp-auth addon...
	I0916 23:49:18.364169  522590 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 23:49:18.367024  522590 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 23:49:18.367044  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
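The gcp-auth sequence above mirrors the other addons: minikube copies the credentials and the three gcp-auth manifests onto the node over ssh, applies them with the bundled kubectl, and then falls into the same label-selector wait, this time in the gcp-auth namespace. From outside the node, the whole flow is normally triggered with (a sketch; profile name from the log):

    minikube -p addons-069011 addons enable gcp-auth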
	I0916 23:49:18.588324  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:18.652355  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:18.654775  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:18.655329  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:18.867741  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:19.151755  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:19.154903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:19.154930  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:19.161345  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:19.161383  522590 retry.go:31] will retry after 2.165570319s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:19.367774  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:19.614026  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:19.652152  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:19.655765  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:19.655827  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:19.867758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:20.151387  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:20.154666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:20.154897  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:20.368600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:20.651411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:20.655000  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:20.655011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:20.868027  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:21.151730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:21.155244  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:21.155464  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:21.327698  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:21.367411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:21.650905  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:21.655659  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:21.655769  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:21.867968  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:21.902069  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:21.902100  522590 retry.go:31] will retry after 1.920767743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:22.113269  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:22.152312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:22.154840  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:22.154952  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:22.368638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:22.651563  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:22.654897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:22.655020  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:22.868412  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:23.151599  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:23.155033  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:23.155245  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:23.367616  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:23.651422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:23.654714  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:23.654854  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:23.823078  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:23.867734  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:24.113772  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:24.152012  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:24.155306  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:24.155536  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:24.367843  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:24.396574  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:24.396608  522590 retry.go:31] will retry after 5.249600328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
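Note that the "will retry after ..." delays are not fixed: they grow roughly geometrically with random jitter (298ms, 724ms, 1.24s, 1.92s, 5.25s, ...), so a persistently failing apply backs off instead of hammering the API server. The shape of that loop, reduced to shell (a sketch of the backoff pattern, not minikube's actual retry.go):

    delay=0.3
    for attempt in 1 2 3 4 5 6; do
        kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml \
            -f /etc/kubernetes/addons/ig-deployment.yaml && break
        sleep "$delay"
        delay=$(echo "$delay * 2" | bc)   # roughly double the delay between attempts
    done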
	I0916 23:49:24.651892  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:24.655386  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:24.655528  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:24.868048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:25.152228  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:25.154971  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:25.155056  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:25.368598  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:25.651661  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:25.655231  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:25.655269  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:25.867507  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:26.151287  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:26.155745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:26.155923  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:26.368083  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:26.612894  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:26.652086  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:26.655386  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:26.655500  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:26.867894  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:27.151727  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:27.155040  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:27.155077  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:27.368077  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:27.652080  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:27.655544  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:27.655685  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:27.868071  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:28.151972  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:28.155039  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:28.155194  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:28.367271  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:28.613247  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:28.652605  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:28.654553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:28.654734  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:28.868444  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:29.151120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:29.155325  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:29.155404  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:29.367903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:29.646635  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:29.651947  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:29.655369  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:29.655591  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:29.868090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:30.151994  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:30.155445  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:30.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:30.222879  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:30.222909  522590 retry.go:31] will retry after 6.679975361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:30.368039  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:30.651921  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:30.655141  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:30.655354  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:30.867036  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:31.112894  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:31.151818  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:31.155258  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:31.155291  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:31.367578  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:31.651196  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:31.655723  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:31.655764  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:31.867818  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:32.152173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:32.155965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:32.156115  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:32.367078  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:32.652733  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:32.655287  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:32.655347  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:32.867604  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:33.113866  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:33.151850  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:33.155462  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:33.155490  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:33.367548  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:33.651173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:33.655487  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:33.655550  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:33.867796  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:34.151692  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:34.154752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:34.154822  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:34.367980  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:34.652127  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:34.655730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:34.655791  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:34.868271  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:35.151839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:35.155765  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:35.155925  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:35.368376  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:35.613366  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:35.651791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:35.655929  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:35.656002  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:35.868276  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:36.152007  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:36.155246  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:36.155379  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:36.367593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:36.652140  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:36.655627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:36.655826  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:36.867579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:36.903759  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:37.152322  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:37.155245  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:37.155410  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:37.367621  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:37.484516  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:37.484552  522590 retry.go:31] will retry after 4.853736845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
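
The failure above is kubectl's client-side validation, not a cluster problem: every Kubernetes manifest document must declare apiVersion and kind, and ig-crd.yaml as shipped sets neither, so the apply exits with status 1 even though every object in ig-deployment.yaml is accepted ("unchanged"/"configured"). A minimal sketch of that check in Go, assuming a single-document YAML file and the gopkg.in/yaml.v3 package (an illustration of the rule, not kubectl's actual validator):

```go
package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3" // assumed dependency for this sketch
)

// checkTypeMeta reproduces the validation failure logged above: a
// manifest document is rejected when it does not set the apiVersion
// and kind fields. Handles a single YAML document only.
func checkTypeMeta(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var doc map[string]interface{}
	if err := yaml.Unmarshal(data, &doc); err != nil {
		return err
	}
	var missing []string
	if s, _ := doc["apiVersion"].(string); s == "" {
		missing = append(missing, "apiVersion not set")
	}
	if s, _ := doc["kind"].(string); s == "" {
		missing = append(missing, "kind not set")
	}
	if len(missing) > 0 {
		return fmt.Errorf("error validating %q: [%s]", path, strings.Join(missing, ", "))
	}
	return nil
}

func main() {
	if err := checkTypeMeta("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
		fmt.Println("error:", err)
	}
}
```
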
	W0916 23:49:37.613755  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:37.651588  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:37.654987  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:37.655126  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:37.867377  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:38.151407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:38.154847  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:38.155074  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:38.368215  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:38.651724  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:38.655025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:38.655174  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:38.867641  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:39.151291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:39.155533  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:39.155660  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:39.368023  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:39.613957  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:39.652056  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:39.655324  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:39.655427  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:39.867688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:40.151889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:40.155213  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:40.155515  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:40.367629  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:40.652268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:40.655504  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:40.655716  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:40.867786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:41.151908  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:41.155026  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:41.155219  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:41.367009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:41.652274  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:41.654845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:41.654993  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:41.868497  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:42.113784  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:42.152011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:42.156178  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:42.156253  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:42.339312  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:42.368085  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:42.653863  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:42.656534  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:42.656609  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:42.867016  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:42.931965  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:42.932013  522590 retry.go:31] will retry after 9.201032876s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
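
Note the retry cadence: the same apply is re-run after 4.85s here, 9.20s on the next failure, and 11.24s further down, i.e. a growing, jittered backoff — and since the malformed file on disk never changes, every attempt fails identically. A sketch of that retry shape under those assumptions (a generic backoff loop, not minikube's actual retry package):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs apply until it succeeds or attempts run
// out, sleeping a growing, jittered interval between tries -- the
// same shape as the "will retry after ..." sequence in the log.
func retryWithBackoff(attempts int, base time.Duration, apply func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		// roughly double the wait each attempt, plus up to 50% jitter
		wait := base << uint(i)
		wait += time.Duration(rand.Int63n(int64(wait / 2)))
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	_ = retryWithBackoff(3, 5*time.Second, func() error {
		return errors.New("apply failed") // placeholder for the kubectl apply
	})
}
```
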
	I0916 23:49:43.151738  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:43.155452  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:43.157165  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:43.367931  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:43.651921  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:43.655792  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:43.655791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:43.868283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:44.151192  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:44.155952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:44.156077  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:44.368187  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:44.612897  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:44.651871  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:44.655165  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:44.655374  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:44.867416  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:45.152200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:45.155365  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:45.155527  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:45.367088  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:45.652905  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:45.655224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:45.655382  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:45.867470  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:46.152562  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:46.155553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:46.155698  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:46.367899  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:46.613967  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:46.652183  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:46.655613  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:46.655685  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:46.867721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:47.151749  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:47.155062  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:47.155242  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:47.367292  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:47.652156  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:47.655812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:47.656147  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:47.867423  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:48.152152  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:48.155526  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:48.155678  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:48.367871  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:48.651966  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:48.655104  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:48.655456  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:48.867380  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:49.113864  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:49.151422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:49.154601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:49.154659  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:49.368059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:49.651895  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:49.655081  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:49.655227  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:49.867193  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:50.151407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:50.154433  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:50.154532  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:50.367752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:50.614048  522590 node_ready.go:49] node "addons-069011" is "Ready"
	I0916 23:49:50.614124  522590 node_ready.go:38] duration metric: took 40.004018622s for node "addons-069011" to be "Ready" ...
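
The node flips from "Ready":"False" to Ready once the kubelet posts the NodeReady condition as True; the 40s duration above is simply how long that condition was polled. The same check can be made directly with client-go, as in this sketch (the kubeconfig path is an assumption for a local run):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True -- the
// same condition the node_ready.go lines above polled for 40s.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(kubernetes.NewForConfigOrDie(cfg), "addons-069011")
	fmt.Println(ready, err)
}
```
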
	I0916 23:49:50.614142  522590 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:49:50.614260  522590 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:49:50.634002  522590 api_server.go:72] duration metric: took 40.737149121s to wait for apiserver process to appear ...
	I0916 23:49:50.634037  522590 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:49:50.634066  522590 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:49:50.639530  522590 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:49:50.640709  522590 api_server.go:141] control plane version: v1.34.0
	I0916 23:49:50.640743  522590 api_server.go:131] duration metric: took 6.69752ms to wait for apiserver health ...
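
After pgrep confirms the kube-apiserver process exists, health is verified with a plain HTTPS GET against /healthz, where a 200 response with body "ok" counts as healthy — exactly what the lines above record. A minimal sketch of that probe; the InsecureSkipVerify setting stands in for the cluster CA handling a real client would use and should not leave a local sketch like this:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

// probeHealthz performs the check logged above: GET the apiserver
// /healthz endpoint and treat HTTP 200 with body "ok" as healthy.
func probeHealthz(url string) error {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
	return nil
}

func main() {
	_ = probeHealthz("https://192.168.49.2:8443/healthz")
}
```
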
	I0916 23:49:50.640754  522590 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:49:50.645035  522590 system_pods.go:59] 20 kube-system pods found
	I0916 23:49:50.645109  522590 system_pods.go:61] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:50.645119  522590 system_pods.go:61] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:50.645126  522590 system_pods.go:61] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending
	I0916 23:49:50.645131  522590 system_pods.go:61] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending
	I0916 23:49:50.645134  522590 system_pods.go:61] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending
	I0916 23:49:50.645138  522590 system_pods.go:61] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:50.645141  522590 system_pods.go:61] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:50.645146  522590 system_pods.go:61] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:50.645150  522590 system_pods.go:61] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:50.645156  522590 system_pods.go:61] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:50.645165  522590 system_pods.go:61] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:50.645171  522590 system_pods.go:61] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:50.645182  522590 system_pods.go:61] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:50.645192  522590 system_pods.go:61] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:50.645206  522590 system_pods.go:61] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:50.645211  522590 system_pods.go:61] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending
	I0916 23:49:50.645217  522590 system_pods.go:61] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:50.645222  522590 system_pods.go:61] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.645231  522590 system_pods.go:61] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.645238  522590 system_pods.go:61] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:50.645253  522590 system_pods.go:74] duration metric: took 4.491675ms to wait for pod list to return data ...
	I0916 23:49:50.645267  522590 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:49:50.649832  522590 default_sa.go:45] found service account: "default"
	I0916 23:49:50.649863  522590 default_sa.go:55] duration metric: took 4.587184ms for default service account to be created ...
	I0916 23:49:50.649876  522590 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:49:50.651240  522590 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 23:49:50.651263  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:50.653416  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:50.653453  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:50.653463  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:50.653471  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending
	I0916 23:49:50.653478  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending
	I0916 23:49:50.653507  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending
	I0916 23:49:50.653517  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:50.653523  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:50.653531  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:50.653541  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:50.653553  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:50.653564  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:50.653570  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:50.653577  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:50.653586  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:50.653604  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:50.653610  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending
	I0916 23:49:50.653621  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:50.653630  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.653641  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.653649  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:50.653671  522590 retry.go:31] will retry after 286.454663ms: missing components: kube-dns
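
All of the "waiting for pod <selector>" and system_pods loops above reduce to one primitive: list kube-system pods by label selector and retry until each reported phase is Running (coredns is the last "missing component" at this point). One poll of that loop might look like the following client-go sketch; the selector and kubeconfig path are illustrative:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsRunning does one poll of the loop logged above: list pods in
// kube-system matching a label selector and report whether every
// matching pod has reached phase Running.
func podsRunning(cs *kubernetes.Clientset, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return false, nil
		}
	}
	return len(pods.Items) > 0, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	ok, err := podsRunning(kubernetes.NewForConfigOrDie(cfg),
		"kubernetes.io/minikube-addons=registry")
	fmt.Println(ok, err)
}
```
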
	I0916 23:49:50.654669  522590 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 23:49:50.654689  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:50.655263  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:50.867812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:50.970963  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:50.971008  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:50.971021  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:50.971032  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 23:49:50.971040  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 23:49:50.971049  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 23:49:50.971060  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:50.971067  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:50.971075  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:50.971081  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:50.971093  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:50.971098  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:50.971107  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:50.971115  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:50.971127  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:50.971139  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:50.971149  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0916 23:49:50.971487  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:50.971519  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.971529  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.971537  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:50.971560  522590 retry.go:31] will retry after 250.710433ms: missing components: kube-dns
	I0916 23:49:51.152661  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:51.154830  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:51.154922  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:51.227146  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:51.227184  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:51.227191  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:51.227200  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 23:49:51.227206  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 23:49:51.227213  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 23:49:51.227219  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:51.227223  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:51.227226  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:51.227230  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:51.227235  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:51.227241  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:51.227244  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:51.227250  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:51.227256  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:51.227261  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:51.227265  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0916 23:49:51.227272  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:51.227277  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.227286  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.227292  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:51.227310  522590 retry.go:31] will retry after 293.334556ms: missing components: kube-dns
	I0916 23:49:51.368304  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:51.526481  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:51.526535  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:51.526545  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Running
	I0916 23:49:51.526559  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 23:49:51.526572  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 23:49:51.526582  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 23:49:51.526589  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:51.526595  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:51.526601  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:51.526608  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:51.526618  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:51.526623  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:51.526629  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:51.526635  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:51.526645  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:51.526690  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:51.526699  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0916 23:49:51.526714  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:51.526722  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.526731  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.526737  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Running
	I0916 23:49:51.526755  522590 system_pods.go:126] duration metric: took 876.872082ms to wait for k8s-apps to be running ...
	I0916 23:49:51.526767  522590 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:49:51.526834  522590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:49:51.543571  522590 system_svc.go:56] duration metric: took 16.790922ms WaitForService to wait for kubelet
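
The quick kubelet check above relies on `systemctl is-active --quiet`, which exits 0 only when the unit is active; the exit status carried back over SSH is all that is needed. An equivalent local sketch (the unit name is passed directly here rather than via minikube's exact argument string):

```go
package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive mirrors the check logged above: systemctl's exit
// status is 0 only when the kubelet unit is currently active.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}
```
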
	I0916 23:49:51.543604  522590 kubeadm.go:578] duration metric: took 41.646760707s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:49:51.543633  522590 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:49:51.546804  522590 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:49:51.546832  522590 node_conditions.go:123] node cpu capacity is 8
	I0916 23:49:51.546851  522590 node_conditions.go:105] duration metric: took 3.210939ms to run NodePressure ...
	I0916 23:49:51.546866  522590 start.go:241] waiting for startup goroutines ...
	I0916 23:49:51.653201  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:51.655460  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:51.655502  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:51.867905  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:52.133215  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:52.152421  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:52.155241  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:52.155318  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:52.367901  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:52.651612  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:52.655810  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:52.655874  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:52.780604  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:52.780644  522590 retry.go:31] will retry after 11.236841486s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:52.867960  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:53.152499  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:53.155229  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:53.155690  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:53.369120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:53.653294  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:53.655366  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:53.655499  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:53.867612  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:54.152263  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:54.154786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:54.154825  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:54.368535  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:54.651809  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:54.655532  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:54.655654  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:54.868318  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:55.152216  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:55.154997  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:55.155198  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:55.368885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:55.652607  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:55.654882  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:55.654882  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:55.868072  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:56.153735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:56.155961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:56.156369  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:56.367288  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:56.651552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:56.654554  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:56.654654  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:56.867827  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:57.152232  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:57.154799  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:57.154814  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:57.368344  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:57.651690  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:57.655166  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:57.655327  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:57.867912  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:58.152149  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:58.155593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:58.155720  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:58.367868  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:58.652249  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:58.654626  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:58.654817  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:58.867989  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:59.152281  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:59.154848  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:59.154899  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:59.368414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:59.651849  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:59.655048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:59.655193  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:59.866961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:00.152429  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:00.154913  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:00.154932  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:00.367821  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:00.652008  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:00.655477  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:00.655518  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:00.867460  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:01.152318  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:01.155248  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:01.155323  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:01.367552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:01.651746  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:01.655519  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:01.655601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:01.867766  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:02.152212  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:02.154600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:02.154831  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:02.367336  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:02.651757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:02.655315  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:02.655331  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:02.867665  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:03.152281  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:03.154749  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:03.154818  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:03.368215  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:03.651319  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:03.655739  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:03.655966  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:03.868159  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
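Each kapi.go:96 line above is one tick of a poll loop: minikube lists the pods matching a label selector and logs the observed phase (still Pending here, with no status message, which is where the [<nil>] comes from). A minimal sketch of that wait pattern, assuming client-go; the waitForPodRunning name, the 500ms interval, and the 6-minute timeout are illustrative, not minikube's actual code (the timeout mirrors the test's own 6m0s wait):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodRunning polls until a pod matching selector in ns reports phase
// Running, the same observe-and-log loop the kapi.go lines above come from.
func waitForPodRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat list errors as transient and keep polling
			}
			for _, p := range pods.Items {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil // nothing matched yet, or everything is still Pending
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodRunning(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=registry"); err != nil {
		fmt.Println("pod never became Running:", err)
	}
}

The four selectors polled in this excerpt (registry, ingress-nginx, csi-hostpath-driver, gcp-auth) would each get their own call of this shape.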
	I0916 23:50:04.018435  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:50:04.151970  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:04.155986  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:04.156204  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:04.367594  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:50:04.598781  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:50:04.598815  522590 retry.go:31] will retry after 23.829016694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
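The failure itself is a manifest problem, not a cluster problem: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because at least one YAML document in it has no apiVersion or kind set. The error text offers --validate=false as an escape hatch, but that only skips the check; the apply would still fail server-side for a typeless object. A minimal sketch, assuming sigs.k8s.io/yaml and naive splitting on document separators, of a pre-check that would catch this before shelling out to kubectl apply (checkTypeMeta and the local file path are illustrative):

package main

import (
	"fmt"
	"os"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

// checkTypeMeta reports any YAML document in the manifest that is missing
// apiVersion or kind, which is exactly what kubectl's validation rejected.
func checkTypeMeta(manifest []byte) error {
	for i, doc := range strings.Split(string(manifest), "\n---") {
		if strings.TrimSpace(doc) == "" {
			continue // skip empty documents between separators
		}
		var tm metav1.TypeMeta
		if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
			return fmt.Errorf("document %d: %w", i, err)
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			return fmt.Errorf("document %d: apiVersion and kind must both be set", i)
		}
	}
	return nil
}

func main() {
	// Path is illustrative; on the node the failing file is
	// /etc/kubernetes/addons/ig-crd.yaml.
	data, err := os.ReadFile("ig-crd.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := checkTypeMeta(data); err != nil {
		fmt.Fprintln(os.Stderr, "validation:", err)
		os.Exit(1)
	}
	fmt.Println("all documents carry apiVersion and kind")
}

Note that the other resources in the two files (namespace, serviceaccount, daemonset, and so on) apply cleanly; only the CRD file's contents trip validation, so the fix belongs in that one manifest.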
	I0916 23:50:04.652029  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:04.655382  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:04.655518  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:04.867585  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:05.151943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:05.155427  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:05.155490  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:05.367838  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:05.652819  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:05.654813  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:05.654893  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:05.868265  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:06.151902  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:06.155241  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:06.155278  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:06.367335  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:06.651933  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:06.655376  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:06.655409  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:06.867544  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:07.151927  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:07.155463  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:07.155566  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:07.367946  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:07.652554  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:07.655150  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:07.655250  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:07.867104  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:08.151576  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:08.154867  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:08.154932  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:08.367820  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:08.652108  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:08.655667  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:08.655674  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:08.867488  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:09.151318  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:09.155660  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:09.155771  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:09.368018  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:09.652352  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:09.654759  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:09.654924  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:09.867979  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:10.152292  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:10.154712  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:10.154744  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:10.367888  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:10.652342  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:10.654855  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:10.655052  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:10.868023  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:11.152284  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:11.154741  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:11.154823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:11.368224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:11.651602  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:11.654730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:11.655430  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:11.867911  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:12.152453  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:12.155032  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:12.155233  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:12.367898  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:12.652236  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:12.654831  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:12.654839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:12.868375  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:13.151282  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:13.155678  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:13.155786  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:13.368346  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:13.652132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:13.655641  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:13.655658  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:13.867735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:14.152048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:14.155624  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:14.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:14.367645  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:14.651952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:14.655351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:14.655433  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:14.867300  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:15.151804  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:15.155275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:15.155321  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:15.367103  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:15.651754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:15.655590  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:15.655740  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:15.868629  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:16.152123  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:16.155556  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:16.155585  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:16.367279  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:16.651583  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:16.655042  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:16.655146  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:16.867499  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:17.151753  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:17.154889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:17.154944  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:17.368258  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:17.651448  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:17.655920  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:17.655988  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:17.868165  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:18.151576  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:18.155019  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:18.155157  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:18.368301  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:18.651579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:18.654851  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:18.655022  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:18.868093  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:19.152647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:19.154885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:19.154951  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:19.368636  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:19.651987  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:19.655509  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:19.655549  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:19.867433  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:20.152200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:20.154985  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:20.155048  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:20.368109  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:20.651638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:20.654894  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:20.654923  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:20.867870  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:21.152292  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:21.155357  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:21.155505  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:21.368035  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:21.652897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:21.656101  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:21.656100  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:21.867817  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:22.152943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:22.155198  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:22.155272  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:22.367576  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:22.652627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:22.655810  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:22.655870  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:22.867990  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:23.152723  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:23.155609  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:23.155624  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:23.367814  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:23.653531  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:23.655283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:23.655824  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:23.867298  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:24.151888  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:24.155832  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:24.155956  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:24.373346  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:24.652179  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:24.655942  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:24.656079  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:24.867787  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:25.152745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:25.156266  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:25.156485  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:25.367952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:25.653577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:25.655613  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:25.655819  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:25.867860  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:26.153299  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:26.155510  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:26.155645  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:26.367671  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:26.652834  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:26.655448  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:26.655652  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:26.867254  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:27.151981  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:27.156009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:27.156850  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:27.367744  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:27.654351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:27.656634  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:27.656737  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:27.868098  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:28.153435  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:28.156745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:28.156944  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:28.367835  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:28.428940  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:50:28.651949  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:28.655492  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:28.655714  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:28.866833  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:50:29.128531  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:50:29.128569  522590 retry.go:31] will retry after 40.39789771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
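The retry.go:31 lines show the delay growing from roughly 24s after the first failure to roughly 40s after the second, consistent with an exponential backoff with random jitter. A minimal, self-contained sketch of that pattern; retryWithBackoff, the attempt count, and the base delay are illustrative, not minikube's actual parameters:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn up to attempts times, sleeping an exponentially
// growing, jittered delay between tries, the shape suggested by the
// "will retry after ..." lines above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if i == attempts-1 {
			break
		}
		// Double the base each attempt and add up to 50% random jitter.
		delay := base * (1 << i)
		delay += time.Duration(rand.Int63n(int64(delay / 2)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	apply := func() error { return errors.New("apply failed") } // stand-in for the kubectl apply call
	if err := retryWithBackoff(3, 10*time.Second, apply); err != nil {
		fmt.Println("giving up:", err)
	}
}

Backoff only helps transient failures; since the manifest itself is invalid, every retry in this log fails identically, and the time spent backing off just contributes to the addon pods never leaving Pending within the test's window.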
	I0916 23:50:29.154066  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:29.156666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:29.156872  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:29.367799  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:29.652238  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:29.654780  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:29.655095  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:29.867922  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:30.152458  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:30.155006  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:30.155093  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:30.367812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:30.652850  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:30.655351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:30.655439  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:30.867340  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:31.151917  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:31.155386  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:31.155417  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:31.367531  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:31.653268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:31.657791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:31.657831  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:31.868270  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:32.155469  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:32.157902  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:32.158614  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:32.368334  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:32.652124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:32.656126  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:32.656171  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:32.867579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:33.152224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:33.155033  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:33.156187  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:33.366965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:33.652338  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:33.655162  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:33.655350  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:33.868673  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:34.152675  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:34.155008  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:34.155063  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:34.368239  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:34.652014  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:34.655025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:34.655185  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:34.867899  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:35.152626  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:35.155359  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:35.155446  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:35.367305  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:35.652378  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:35.655807  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:35.655815  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:35.868004  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:36.152291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:36.155228  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:36.155274  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:36.367904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:36.652666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:36.655054  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:36.655056  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:36.868245  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:37.153660  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:37.155936  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:37.156021  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:37.367947  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:37.652965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:37.654916  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:37.654970  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:37.867352  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:38.152079  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:38.155581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:38.155593  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:38.367781  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:38.652943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:38.655717  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:38.655815  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:38.868640  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:39.152316  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:39.155082  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:39.155138  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:39.368233  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:39.651993  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:39.654885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:39.655026  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:39.868217  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:40.152059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:40.155525  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:40.155590  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:40.367907  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:40.652106  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:40.655499  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:40.655512  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:40.867817  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:41.152251  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:41.154655  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:41.154763  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:41.367545  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:41.652678  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:41.654751  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:41.654768  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:41.868012  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:42.152312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:42.154862  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:42.154889  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:42.368681  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:42.652243  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:42.654497  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:42.654707  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:42.867848  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:43.152560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:43.156124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:43.156157  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:43.367649  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:43.652430  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:43.654968  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:43.654986  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:43.867477  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:44.151715  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:44.154833  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:44.154926  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:44.368003  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:44.652097  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:44.655411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:44.655482  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:44.867734  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:45.151785  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:45.155040  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:45.155294  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:45.367710  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:45.652316  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:45.654798  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:45.654835  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:45.867771  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:46.151940  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:46.155607  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:46.155638  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:46.367470  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:46.652017  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:46.655632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:46.655678  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:46.867796  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:47.152166  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:47.155566  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:47.155778  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:47.367781  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:47.653210  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:47.655490  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:47.655647  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:47.867856  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:48.152084  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:48.155486  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:48.155488  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:48.367425  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:48.651605  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:48.654912  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:48.654974  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:48.868218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:49.151097  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:49.155642  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:49.155716  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:49.367781  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:49.652527  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:49.654528  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:49.654540  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:49.867508  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:50.152341  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:50.155428  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:50.155428  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:50.367631  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:50.651795  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:50.654967  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:50.655191  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:50.867951  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:51.152414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:51.154961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:51.155228  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:51.368136  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:51.654278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:51.658434  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:51.658602  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:51.867554  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:52.151825  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:52.154981  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:52.155043  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:52.368227  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:52.651587  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:52.654841  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:52.654981  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:52.868253  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:53.151568  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:53.154852  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:53.154906  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:53.368332  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:53.652244  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:53.654695  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:53.654772  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:53.867872  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:54.152199  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:54.155137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:54.155272  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:54.367783  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:54.652699  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:54.654783  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:54.654979  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:54.868132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:55.152259  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:55.154647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:55.154768  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:55.367668  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:55.652881  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:55.655002  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:55.655049  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:55.868381  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:56.151518  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:56.154713  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:56.154713  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:56.367620  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:56.651888  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:56.655083  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:56.655175  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:56.868708  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:57.152144  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:57.155438  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:57.155487  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:57.367472  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:57.652234  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:57.654836  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:57.654874  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:57.867903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:58.152561  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:58.154532  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:58.154668  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:58.367739  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:58.652325  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:58.655541  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:58.655728  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:58.867577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:59.152224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:59.155017  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:59.155130  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:59.368654  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:59.652953  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:59.654943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:59.654982  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:59.868114  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:00.151581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:00.154961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:00.155143  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:00.368473  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:00.651816  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:00.655282  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:00.655277  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:00.867147  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:01.151121  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:01.155427  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:01.155456  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:01.367218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:01.651621  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:01.654735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:01.654783  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:01.867758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:02.152018  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:02.155540  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:02.155576  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:02.367896  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:02.652385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:02.655222  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:02.655273  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:02.867265  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:03.151348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:03.156159  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:03.156250  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:03.367497  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:03.652167  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:03.655608  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:03.655715  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:03.867725  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:04.151972  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:04.155471  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:04.155479  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:04.367579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:04.652472  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:04.655145  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:04.655205  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:04.867055  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:05.153048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:05.155508  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:05.155556  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:05.367853  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:05.653083  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:05.655046  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:05.655090  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:05.867138  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:06.152134  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:06.155607  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:06.155674  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:06.367789  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:06.652335  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:06.654809  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:06.654932  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:06.868697  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:07.152531  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:07.154911  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:07.154955  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:07.370805  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:07.652428  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:07.654916  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:07.654974  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:07.868557  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:08.151860  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:08.155090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:08.155145  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:08.367368  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:08.651698  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:08.654845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:08.654852  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:08.868069  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:09.151519  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:09.154937  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:09.154942  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:09.368515  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
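	The kapi.go:96 lines that dominate this log are minikube's addon wait loop: four label selectors (registry, ingress-nginx, csi-hostpath-driver, gcp-auth) are polled on a sub-second cadence, and each pass reports the matched pods still Pending. A minimal sketch of that kind of label-selector poll, assuming client-go; waitForPod, the namespace, and the 500ms ticker are illustrative, not minikube's actual implementation:

	// poll.go: list pods matching a selector until one reaches Running,
	// loosely mirroring the "waiting for pod ... current state" lines above.
	// Illustrative only; not minikube's kapi.WaitForPods.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForPod(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		ticker := time.NewTicker(500 * time.Millisecond) // assumed cadence, similar to the log's
		defer ticker.Stop()
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // a deadline set on ctx surfaces here as context.DeadlineExceeded
			case <-ticker.C:
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForPod(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
			fmt.Println("wait failed:", err)
		}
	}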
	I0916 23:51:09.526750  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:51:09.652541  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:09.655572  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:09.655659  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:09.868054  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:51:10.098163  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:51:10.098324  522590 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
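	The retry above (addons.go:461) is triggered by kubectl's client-side validation: every document in ig-deployment.yaml applies cleanly ("unchanged"/"configured"), but ig-crd.yaml carries no apiVersion or kind fields, so kubectl cannot resolve it to an API type and the apply exits with status 1. A minimal sketch of the check that trips here, assuming Kubernetes apimachinery and sigs.k8s.io/yaml rather than minikube's or kubectl's own code:

	// gvkcheck.go: decode a manifest and verify it names an API type,
	// the same precondition kubectl's validator reports as
	// "apiVersion not set, kind not set". Illustrative only.
	package main

	import (
		"fmt"

		"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
		"sigs.k8s.io/yaml"
	)

	func main() {
		// apiVersion and kind deliberately missing, as in the failing ig-crd.yaml.
		manifest := []byte("metadata:\n  name: example\n")

		var obj unstructured.Unstructured
		if err := yaml.Unmarshal(manifest, &obj.Object); err != nil {
			panic(err)
		}
		if gvk := obj.GroupVersionKind(); gvk.Version == "" || gvk.Kind == "" {
			fmt.Println("invalid manifest: apiVersion/kind not set")
		}
	}

	Note that the --validate=false escape hatch suggested in the error text only skips this client-side check; a manifest without apiVersion and kind still cannot be routed to an API endpoint, so the likely fix is restoring those two header fields in ig-crd.yaml rather than disabling validation.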
	I0916 23:51:10.152880  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:10.154839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:10.154875  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:10.367834  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:10.652251  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:10.655021  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:10.655084  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:10.867384  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:11.151842  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:11.155099  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:11.155150  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:11.368186  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:11.652269  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:11.654999  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:11.655256  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:11.867128  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:12.152667  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:12.155099  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:12.155107  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:12.367914  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:12.652518  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:12.654870  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:12.654893  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:12.867312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:13.151982  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:13.155271  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:13.155332  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:13.367823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:13.652387  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:13.654951  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:13.655146  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:13.868844  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:14.153334  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:14.155643  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:14.155904  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:14.368482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:14.652515  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:14.655724  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:14.655757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:14.867812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:15.152601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:15.155443  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:15.155604  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:15.367774  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:15.652539  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:15.655836  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:15.655906  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:15.868440  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:16.151573  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:16.154754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:16.154807  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:16.368168  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:16.652042  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:16.655560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:16.655747  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:16.868218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:17.151965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:17.155140  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:17.155210  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:17.368464  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:17.652037  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:17.655823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:17.655854  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:17.867935  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:18.152022  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:18.155444  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:18.155517  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:18.367482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:18.651927  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:18.654865  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:18.655024  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:18.868282  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:19.151370  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:19.155878  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:19.155924  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:19.368413  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:19.651943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:19.655352  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:19.655352  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:19.868827  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:20.151845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:20.155066  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:20.155072  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:20.369339  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:20.651811  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:20.654774  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:20.654963  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:20.867983  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:21.152276  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:21.154893  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:21.154944  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:21.367794  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:21.652538  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:21.654934  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:21.654939  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:21.867898  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:22.151949  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:22.155295  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:22.155445  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:22.367407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:22.651590  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:22.654904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:22.655019  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:22.867887  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:23.152190  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:23.155502  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:23.155545  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:23.367753  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:23.652562  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:23.654651  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:23.654656  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:23.867848  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:24.152073  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:24.155610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:24.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:24.367957  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:24.652348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:24.654900  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:24.654900  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:24.868057  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:25.152408  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:25.155409  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:25.155602  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:25.368413  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:25.652052  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:25.655209  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:25.655312  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:25.867380  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:26.151535  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:26.155823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:26.155856  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:26.368351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:26.651651  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:26.654990  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:26.654988  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:26.867537  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:27.152091  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:27.155112  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:27.155142  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:27.368638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:27.654137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:27.656355  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:27.656515  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:27.869096  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:28.152385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:28.154581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:28.154673  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:28.367987  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:28.652294  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:28.654753  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:28.654853  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:28.869651  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:29.152647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:29.154807  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:29.154850  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:29.368887  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:29.654241  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:29.655038  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:29.655196  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:29.867665  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:30.151919  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:30.155232  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:30.155296  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:30.367463  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:30.651721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:30.655098  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:30.655163  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:30.867385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:31.151552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:31.154871  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:31.154947  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:31.369090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:31.652787  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:31.654631  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:31.654656  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:31.869965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:32.152268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:32.154797  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:32.154858  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:32.368137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:32.651480  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:32.654729  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:32.654778  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:32.868357  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:33.151932  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:33.155182  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:33.155339  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:33.367560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:33.651975  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:33.655351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:33.655413  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:33.867981  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:34.152479  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:34.155002  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:34.155059  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:34.368688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:34.651549  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:34.655000  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:34.655063  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:34.868189  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:35.151809  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:35.155205  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:35.155350  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:35.367322  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:35.651627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:35.752333  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:35.752426  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:35.868016  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:36.152178  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:36.155466  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:36.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:36.368191  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:36.651475  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:36.654786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:36.654883  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:36.868252  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:37.152153  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:37.155806  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:37.155969  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:37.368131  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:37.652021  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:37.655754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:37.655968  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:37.869697  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:38.152009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:38.155144  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:38.155151  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:38.369995  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:38.652185  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:38.655536  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:38.655553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:38.867639  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:39.151740  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:39.154964  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:39.155029  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:39.368608  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:39.651802  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:39.654757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:39.654961  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:39.869716  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:40.152077  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:40.155323  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:40.155354  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:40.367481  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:40.651750  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:40.655053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:40.655154  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:40.867047  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:41.152227  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:41.154790  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:41.154936  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:41.367727  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:41.652124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:41.655578  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:41.655618  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:41.869685  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:42.152239  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:42.154748  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:42.154775  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:42.367986  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:42.652348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:42.654735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:42.654796  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:42.868157  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:43.151984  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:43.155093  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:43.155268  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:43.367574  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:43.652278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:43.655113  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:43.655163  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:43.867108  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:44.151635  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:44.155169  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:44.155303  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:44.367632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:44.654449  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:44.656348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:44.656416  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:44.867492  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:45.151632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:45.155015  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:45.155082  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:45.368046  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:45.652581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:45.655278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:45.655440  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:45.867304  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:46.151985  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:46.155138  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:46.155139  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:46.367275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:46.652201  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:46.654659  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:46.654708  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:46.867813  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:47.152102  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:47.155410  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:47.155445  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:47.368132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:47.652347  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:47.654903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:47.654929  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:47.868615  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:48.151762  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:48.154894  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:48.155015  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:48.367728  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:48.652716  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:48.655105  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:48.655114  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:48.867844  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:49.151899  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:49.155222  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:49.155285  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:49.367647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:49.651960  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:49.655182  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:49.655212  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:49.867701  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:50.152323  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:50.154730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:50.154952  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:50.368036  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:50.652752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:50.655140  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:50.655212  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:50.867998  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:51.152002  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:51.155125  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:51.155152  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:51.367814  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:51.652049  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:51.655522  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:51.655726  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:51.868294  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:52.151791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:52.155565  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:52.155573  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:52.367865  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:52.652161  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:52.655512  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:52.655672  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:52.868579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:53.151650  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:53.154924  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:53.155034  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:53.369092  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:53.651132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:53.655513  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:53.655522  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:53.868691  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:54.152450  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:54.155354  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:54.155524  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:54.367600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:54.651882  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:54.655373  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:54.655408  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:54.867056  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:55.152214  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:55.154682  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:55.154691  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:55.367828  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:55.652289  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:55.654838  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:55.654919  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:55.868482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:56.152185  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:56.155573  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:56.155680  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:56.367605  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:56.652000  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:56.655613  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:56.655628  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:56.867754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:57.152556  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:57.155032  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:57.155095  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:57.367975  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:57.652348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:57.654696  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:57.654741  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:57.868401  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:58.153486  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:58.155941  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:58.156005  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:58.368023  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:58.652886  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:58.654744  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:58.654924  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:58.867833  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:59.152068  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:59.155056  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:59.155191  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:59.368282  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:59.651560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:59.654879  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:59.654906  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:59.868124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:00.151834  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:00.155229  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:00.155287  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:00.368228  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:00.651552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:00.654864  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:00.655039  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:00.867812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:01.152355  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:01.155216  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:01.155250  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:01.367206  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:01.651490  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:01.655688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:01.655736  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:01.868528  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:02.152001  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:02.155540  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:02.155683  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:02.367787  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:02.652284  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:02.654662  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:02.654849  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:02.868355  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:03.151870  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:03.155448  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:03.155589  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:03.369165  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:03.652124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:03.655412  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:03.655514  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:03.867952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:04.152595  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:04.154738  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:04.154768  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:04.368177  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:04.651492  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:04.654766  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:04.654890  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:04.867847  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:05.152178  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:05.155407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:05.155591  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:05.367682  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:05.652426  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:05.655066  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:05.655077  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:05.868692  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:06.151879  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:06.154999  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:06.155191  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:06.368983  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:06.652433  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:06.655105  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:06.655103  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:06.867405  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:07.151744  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:07.155222  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:07.155303  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:07.367552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:07.651596  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:07.654914  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:07.655059  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:07.868458  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:08.152215  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:08.154616  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:08.154655  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:08.367845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:08.652783  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:08.655112  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:08.655120  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:08.868071  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:09.151544  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:09.155208  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:09.155226  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:09.367504  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:09.652199  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:09.655116  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:09.655205  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:09.867581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:10.152537  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:10.155961  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:10.155972  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:10.367914  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:10.652499  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:10.655560  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:10.655570  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:10.867688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:11.153765  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:11.156270  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:11.156301  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:11.367137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:11.652938  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:11.655212  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:11.655254  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:11.867526  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:12.152762  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:12.155539  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:12.155611  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:12.367745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:12.653490  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:12.655575  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:12.655592  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:12.867930  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:13.152233  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:13.154692  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:13.154928  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:13.368718  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:13.652385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:13.655028  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:13.655076  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:13.868860  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:14.152353  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:14.154742  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:14.155285  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:14.367623  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:14.651871  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:14.655140  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:14.655187  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:14.867455  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:15.151851  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:15.155143  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:15.155247  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:15.367164  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:15.652193  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:15.655452  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:15.655496  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:15.867913  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:16.152181  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:16.155667  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:16.155764  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:16.368289  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:16.651762  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:16.654913  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:16.654985  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:16.868273  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:17.152523  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:17.155730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:17.156762  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:17.369278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:17.653153  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:17.656847  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:17.656957  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:17.872367  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:18.152950  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:18.155133  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:18.155208  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:18.368554  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:18.652083  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:18.656110  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:18.656132  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:18.867845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:19.152657  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:19.155336  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:19.155360  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:19.367646  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:19.652603  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:19.655013  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:19.655062  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:19.868632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:20.151907  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:20.155327  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:20.155416  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:20.367287  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:20.651614  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:20.654876  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:20.654920  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:20.867932  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:21.152185  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:21.155533  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:21.155722  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:21.367894  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:21.652307  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:21.654756  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:21.654995  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:21.869050  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:22.151999  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:22.155129  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:22.155241  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:22.367234  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:22.651475  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:22.655728  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:22.655801  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:22.867063  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:23.152370  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:23.154656  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:23.154775  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:23.368226  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:23.651514  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:23.654966  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:23.654979  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:23.867379  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:24.152074  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:24.155478  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:24.155627  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:24.367613  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:24.651861  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:24.655241  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:24.655314  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:24.867408  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:25.151695  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:25.155019  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:25.155047  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:25.368563  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:25.652014  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:25.655145  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:25.655425  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:25.867208  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:26.151957  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:26.156991  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:26.157177  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:26.367383  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:26.651982  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:26.655413  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:26.655465  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:26.867368  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:27.151925  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:27.154970  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:27.155019  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:27.368160  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:27.651611  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:27.654847  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:27.654859  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:27.867942  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:28.152874  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:28.154630  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:28.154694  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:28.368049  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... near-identical kapi.go:96 polling entries elided: the same four selectors (kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=gcp-auth) are re-checked roughly every 500ms and remain Pending from 23:52:28 through 23:52:52 ...]
	I0916 23:52:52.368461  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:52.652610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:52.655667  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:52.655854  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:52.901721  522590 kapi.go:107] duration metric: took 3m34.537544348s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 23:52:52.906543  522590 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-069011 cluster.
	I0916 23:52:52.912324  522590 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 23:52:52.913737  522590 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
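	(Editorial note: per the advisory above, opting a pod out of credential injection is done by adding the `gcp-auth-skip-secret` label key to the pod; the log's wording suggests the key alone is what matters, so the value below is arbitrary. A minimal sketch against this run's cluster context, with a hypothetical pod name and image:

	  kubectl --context addons-069011 run no-gcp-creds --image=busybox \
	    --labels="gcp-auth-skip-secret=true" -- sleep 3600

	Pods created without that label key get the mounted credentials, while pods that already existed when the addon finished only pick them up after the recreate or `addons enable ... --refresh` step the log mentions.)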
	I0916 23:52:53.153197  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:53.155660  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:53.155666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... near-identical kapi.go:96 polling entries elided: csi-hostpath-driver, registry, and ingress-nginx are re-checked roughly every 500ms and remain Pending from 23:52:53 through 23:52:56 ...]
	I0916 23:52:57.152186  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:57.155542  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:57.155561  522590 kapi.go:107] duration metric: took 3m45.503354476s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 23:52:57.652350  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:57.655498  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... near-identical kapi.go:96 polling entries elided: csi-hostpath-driver and registry are re-checked roughly every 500ms and remain Pending from 23:52:58 through 23:53:56 ...]
	I0916 23:53:56.652777  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:56.654897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:57.152275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:57.155829  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:57.653025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:57.654754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:58.152978  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:58.154716  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:58.652635  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:58.654449  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:59.152070  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:59.155270  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:59.652577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:59.655424  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:00.152756  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:00.154426  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:00.651964  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:00.655181  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:01.151369  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:01.155561  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:01.651593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:01.654586  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:02.152252  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:02.154655  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:02.652610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:02.654423  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:03.152030  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:03.155167  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:03.651855  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:03.654881  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:04.151556  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:04.154852  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:04.652834  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:04.654500  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:05.152255  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:05.154344  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:05.652483  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:05.655325  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:06.151729  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:06.154664  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:06.652904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:06.654681  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:07.152267  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:07.154724  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:07.652291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:07.654988  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:08.151577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:08.154865  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:08.652678  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:08.654618  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:09.152302  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:09.154688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:09.653092  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:09.654963  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:10.151758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:10.154735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:10.652999  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:10.654845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:11.151513  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:11.154498  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:11.652494  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:11.654909  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:12.151298  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:12.155557  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:12.652643  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:12.654491  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:13.152751  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:13.155246  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:13.652126  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:13.655183  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:14.151763  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:14.155046  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:14.652276  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:14.654785  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:15.152658  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:15.154758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:15.652985  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:15.655060  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:16.151705  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:16.154775  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:16.652773  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:16.654589  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:17.152592  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:17.155097  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:17.651889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:17.655277  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:18.152217  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:18.154701  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:18.652903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:18.654813  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:19.152686  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:19.154506  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:19.652260  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:19.654251  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:20.152385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:20.154777  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:20.652915  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:20.654754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:21.152381  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:21.155278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:21.651555  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:21.654768  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:22.152695  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:22.154647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:22.652919  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:22.654785  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:23.151929  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:23.155096  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:23.652215  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:23.654600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:24.152243  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:24.154806  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:24.653577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:24.655336  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:25.151915  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:25.154836  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:25.651480  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:25.655757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:26.152467  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:26.154712  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:26.653379  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:26.655466  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:27.151800  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:27.155291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:27.653102  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:27.655592  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:28.153140  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:28.155428  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:28.652276  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:28.654838  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:29.153210  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:29.155329  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:29.652338  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:29.654662  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:30.152491  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:30.154729  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:30.653037  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:30.654741  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:31.152830  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:31.154474  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:31.652230  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:31.654509  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:32.151920  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:32.154827  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:32.653191  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:32.655219  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:33.151306  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:33.155960  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:33.651717  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:33.655110  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:34.152304  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:34.154575  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:34.652514  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:34.654778  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:35.152332  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:35.154701  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:35.652961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:35.654516  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:36.151632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:36.154754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:36.654330  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:36.655691  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:37.152418  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:37.154851  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:37.651435  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:37.654582  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:38.153087  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:38.155042  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:38.652337  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:38.654583  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:39.152997  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:39.154432  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:39.652600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:39.654685  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:40.152066  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:40.154971  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:40.651875  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:40.655064  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:41.152238  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:41.154411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:41.651824  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:41.655370  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:42.152256  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:42.154799  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:42.652896  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:42.655256  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:43.152778  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:43.154615  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:43.652772  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:43.654597  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:44.152798  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:44.155091  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:44.652248  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:44.654728  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:45.152282  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:45.154468  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:45.652120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:45.655482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:46.151671  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:46.154724  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:46.653242  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:46.654823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:47.152812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:47.155015  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:47.651579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:47.654786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:48.152839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:48.155119  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:48.652214  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:48.654840  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:49.152996  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:49.155254  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:49.651623  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:49.654685  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:50.153897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:50.155803  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:50.652443  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:50.654867  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:51.152374  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:51.154640  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:51.653033  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:51.654888  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:52.152649  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:52.154604  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:52.652521  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:52.654615  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:53.152209  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:53.154579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:53.652590  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:53.654414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:54.152200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:54.155017  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:54.651951  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:54.655307  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:55.151878  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:55.155133  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:55.651739  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:55.654805  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:56.152326  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:56.154364  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:56.652520  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:56.654812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:57.152821  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:57.154939  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:57.651434  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:57.655826  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:58.152103  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:58.155132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:58.651824  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:58.655072  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:59.154539  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:59.155149  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:59.652232  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:59.654796  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:00.151638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:00.154787  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:00.652885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:00.654626  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:01.152069  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:01.155444  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:01.652069  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:01.655407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:02.152172  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:02.156173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:02.652301  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:02.654808  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:03.153293  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:03.155684  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:03.652844  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:03.654749  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:04.152881  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:04.155246  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:04.652609  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:04.655098  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:05.151757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:05.155258  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:05.652511  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:05.654688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:06.152258  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:06.154829  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:06.653049  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:06.654904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:07.151579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:07.154591  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:07.652331  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:07.654994  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:08.151784  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:08.154921  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:08.652325  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:08.655067  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:09.151900  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:09.155072  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:09.651978  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:09.655300  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:10.151961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:10.154914  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:10.652232  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:10.654644  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:11.152090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:11.155188  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:11.652025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:11.652821  522590 kapi.go:107] duration metric: took 6m0.000625805s to wait for kubernetes.io/minikube-addons=registry ...
	W0916 23:55:11.652991  522590 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0916 23:55:12.148606  522590 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=csi-hostpath-driver" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0916 23:55:12.148655  522590 kapi.go:107] duration metric: took 6m0.000415083s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0916 23:55:12.148771  522590 out.go:285] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0916 23:55:12.151062  522590 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, ingress-dns, amd-gpu-device-plugin, storage-provisioner, default-storageclass, storage-provisioner-rancher, cloud-spanner, metrics-server, yakd, volumesnapshots, gcp-auth, ingress
	I0916 23:55:12.152575  522590 addons.go:514] duration metric: took 6m2.25568849s for enable addons: enabled=[registry-creds nvidia-device-plugin ingress-dns amd-gpu-device-plugin storage-provisioner default-storageclass storage-provisioner-rancher cloud-spanner metrics-server yakd volumesnapshots gcp-auth ingress]
	I0916 23:55:12.152638  522590 start.go:246] waiting for cluster config update ...
	I0916 23:55:12.152661  522590 start.go:255] writing updated cluster config ...
	I0916 23:55:12.152955  522590 ssh_runner.go:195] Run: rm -f paused
	I0916 23:55:12.157549  522590 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:55:12.161141  522590 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m872b" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.165703  522590 pod_ready.go:94] pod "coredns-66bc5c9577-m872b" is "Ready"
	I0916 23:55:12.165731  522590 pod_ready.go:86] duration metric: took 4.567019ms for pod "coredns-66bc5c9577-m872b" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.168067  522590 pod_ready.go:83] waiting for pod "etcd-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.172550  522590 pod_ready.go:94] pod "etcd-addons-069011" is "Ready"
	I0916 23:55:12.172583  522590 pod_ready.go:86] duration metric: took 4.489308ms for pod "etcd-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.174872  522590 pod_ready.go:83] waiting for pod "kube-apiserver-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.179401  522590 pod_ready.go:94] pod "kube-apiserver-addons-069011" is "Ready"
	I0916 23:55:12.179432  522590 pod_ready.go:86] duration metric: took 4.532992ms for pod "kube-apiserver-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.181473  522590 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.561817  522590 pod_ready.go:94] pod "kube-controller-manager-addons-069011" is "Ready"
	I0916 23:55:12.561846  522590 pod_ready.go:86] duration metric: took 380.349392ms for pod "kube-controller-manager-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.763149  522590 pod_ready.go:83] waiting for pod "kube-proxy-v85kq" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.161850  522590 pod_ready.go:94] pod "kube-proxy-v85kq" is "Ready"
	I0916 23:55:13.161880  522590 pod_ready.go:86] duration metric: took 398.696904ms for pod "kube-proxy-v85kq" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.362802  522590 pod_ready.go:83] waiting for pod "kube-scheduler-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.761895  522590 pod_ready.go:94] pod "kube-scheduler-addons-069011" is "Ready"
	I0916 23:55:13.761929  522590 pod_ready.go:86] duration metric: took 399.094008ms for pod "kube-scheduler-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.761944  522590 pod_ready.go:40] duration metric: took 1.604356273s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:55:13.810173  522590 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0916 23:55:13.812279  522590 out.go:179] * Done! kubectl is now configured to use "addons-069011" cluster and "default" namespace by default
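
The six-minute stretch of kapi.go:96 lines above is minikube polling the registry and csi-hostpath-driver pods by label selector every ~500ms until the context deadline expires. For readers unfamiliar with the pattern, here is a minimal client-go sketch of such a wait loop; the package name, function signature, and "Running" criterion are assumptions for illustration, not minikube's actual kapi.go implementation.

// A minimal client-go sketch of a label-selector wait loop like the one
// that produced the kapi.go:96 lines above. Illustrative only: the names,
// poll interval, and readiness criterion are assumed, not minikube's code.
package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForLabeledPod polls every 500ms until a pod matching selector is
// Running, or the deadline expires with "context deadline exceeded",
// the same error the registry and csi-hostpath-driver waits hit above.
func WaitForLabeledPod(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	return wait.PollUntilContextCancel(ctx, 500*time.Millisecond, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				// Treat list errors (e.g. client rate-limiter waits) as temporary and retry.
				fmt.Printf("temporary error: %v\n", err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
			return false, nil
		})
}

Called with a 6-minute timeout and selector "kubernetes.io/minikube-addons=registry", a loop of this shape reproduces both the cadence and the final "context deadline exceeded" failure recorded in the log.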
	
	
	==> CRI-O <==
	Sep 17 00:07:17 addons-069011 crio[933]: time="2025-09-17 00:07:17.110143241Z" level=info msg="Deleting pod kube-system_snapshot-controller-7d9fbc56b8-s7m82 from CNI network \"kindnet\" (type=ptp)"
	Sep 17 00:07:17 addons-069011 crio[933]: time="2025-09-17 00:07:17.126935301Z" level=info msg="Stopped pod sandbox: 7daa29e729a88a818e2e4a7c27e210711f43626fbc7ffbe3a6d028331ceb2c5b" id=f79885db-5d86-4ebe-80b0-7dceeeb12d26 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 17 00:07:17 addons-069011 crio[933]: time="2025-09-17 00:07:17.135508716Z" level=info msg="Stopped pod sandbox: 4be25aad82a4e3088e29cbb88ec81ad3fe9f12514c16cc70d4d786a95f291d85" id=9fa04921-7f63-4185-aef7-317f03007fbf name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 17 00:07:17 addons-069011 crio[933]: time="2025-09-17 00:07:17.319335779Z" level=info msg="Removing container: af48fae595f24bb6555e6bb2de83831ceaa5c1bfee64086df54b89df858a86b1" id=0d75215b-1ec9-403e-b751-bc9f25234711 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 17 00:07:17 addons-069011 crio[933]: time="2025-09-17 00:07:17.339806545Z" level=info msg="Removed container af48fae595f24bb6555e6bb2de83831ceaa5c1bfee64086df54b89df858a86b1: kube-system/snapshot-controller-7d9fbc56b8-st98r/volume-snapshot-controller" id=0d75215b-1ec9-403e-b751-bc9f25234711 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 17 00:07:17 addons-069011 crio[933]: time="2025-09-17 00:07:17.342134212Z" level=info msg="Removing container: 3c653d4c50b5c7937f8e28055d8e8d90139dad1bbe5c7469eb7140519f4a61ca" id=dab0aeae-bf75-41a6-b366-9d1fd009ae06 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 17 00:07:17 addons-069011 crio[933]: time="2025-09-17 00:07:17.363246597Z" level=info msg="Removed container 3c653d4c50b5c7937f8e28055d8e8d90139dad1bbe5c7469eb7140519f4a61ca: kube-system/snapshot-controller-7d9fbc56b8-s7m82/volume-snapshot-controller" id=dab0aeae-bf75-41a6-b366-9d1fd009ae06 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 17 00:07:19 addons-069011 crio[933]: time="2025-09-17 00:07:19.174542142Z" level=info msg="Checking image status: docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d" id=d66604b4-6d9c-4c70-8eeb-311c587b2402 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:19 addons-069011 crio[933]: time="2025-09-17 00:07:19.174905304Z" level=info msg="Image docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d not found" id=d66604b4-6d9c-4c70-8eeb-311c587b2402 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:27 addons-069011 crio[933]: time="2025-09-17 00:07:27.174114587Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=cc57d6b7-2f2d-4f98-804c-48b59881c219 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:27 addons-069011 crio[933]: time="2025-09-17 00:07:27.174455319Z" level=info msg="Image docker.io/nginx:alpine not found" id=cc57d6b7-2f2d-4f98-804c-48b59881c219 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:30 addons-069011 crio[933]: time="2025-09-17 00:07:30.174326385Z" level=info msg="Checking image status: docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d" id=56bf1c05-c528-4975-9318-3211a6591066 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:30 addons-069011 crio[933]: time="2025-09-17 00:07:30.174611479Z" level=info msg="Image docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d not found" id=56bf1c05-c528-4975-9318-3211a6591066 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:39 addons-069011 crio[933]: time="2025-09-17 00:07:39.174438676Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=cab6f766-2f58-41f4-8c42-2e8a949f8075 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:39 addons-069011 crio[933]: time="2025-09-17 00:07:39.174659129Z" level=info msg="Image docker.io/nginx:alpine not found" id=cab6f766-2f58-41f4-8c42-2e8a949f8075 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:39 addons-069011 crio[933]: time="2025-09-17 00:07:39.378574309Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=aebb79eb-2209-4177-8c11-5cbcf8c8805e name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:39 addons-069011 crio[933]: time="2025-09-17 00:07:39.378821419Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=aebb79eb-2209-4177-8c11-5cbcf8c8805e name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:44 addons-069011 crio[933]: time="2025-09-17 00:07:44.174830506Z" level=info msg="Checking image status: docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d" id=700aa7ed-7a5d-4be0-aac4-3deb25b57266 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:44 addons-069011 crio[933]: time="2025-09-17 00:07:44.175164452Z" level=info msg="Image docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d not found" id=700aa7ed-7a5d-4be0-aac4-3deb25b57266 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:50 addons-069011 crio[933]: time="2025-09-17 00:07:50.174162650Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=c8b7ee46-0685-4854-8a41-7eaedafaa788 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:50 addons-069011 crio[933]: time="2025-09-17 00:07:50.174363155Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=03165641-9b97-4f6e-bc0f-5c8ca8b279a0 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:50 addons-069011 crio[933]: time="2025-09-17 00:07:50.174663284Z" level=info msg="Image docker.io/nginx:alpine not found" id=c8b7ee46-0685-4854-8a41-7eaedafaa788 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:50 addons-069011 crio[933]: time="2025-09-17 00:07:50.174667608Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=03165641-9b97-4f6e-bc0f-5c8ca8b279a0 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:07:50 addons-069011 crio[933]: time="2025-09-17 00:07:50.175234212Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=ad35701e-3abd-4a00-bb37-a88599780a59 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:07:50 addons-069011 crio[933]: time="2025-09-17 00:07:50.179020233Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
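
The ImageStatus entries above show CRI-O repeatedly reporting docker.io/nginx:alpine, the pinned registry:3.0.0 digest, and busybox as not found, which explains why the dependent pods never left Pending. The same query can be made by hand with crictl inspecti on the node, or programmatically; below is a minimal Go sketch of the /runtime.v1.ImageService/ImageStatus RPC named in the log. The socket path and timeout are assumptions for illustration, not minikube or CRI-O code.

// Minimal sketch of the CRI ImageStatus query seen in the CRI-O log above.
// Assumes CRI-O's default socket path; adjust for other configurations.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same RPC as the log's name=/runtime.v1.ImageService/ImageStatus entries.
	client := runtimeapi.NewImageServiceClient(conn)
	resp, err := client.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
		Image: &runtimeapi.ImageSpec{Image: "docker.io/nginx:alpine"},
	})
	if err != nil {
		panic(err)
	}
	if resp.GetImage() == nil {
		fmt.Println("image not found") // matches the "Image ... not found" log lines
	} else {
		fmt.Println("image present:", resp.GetImage().GetId())
	}
}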
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	8fc15d8cb7dd5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          9 minutes ago       Running             csi-snapshotter                          0                   e614fc1047195       csi-hostpathplugin-s98vb
	295b9edc02db1       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          10 minutes ago      Running             csi-provisioner                          0                   e614fc1047195       csi-hostpathplugin-s98vb
	3bebfc3ce5f89       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          11 minutes ago      Running             busybox                                  0                   b34e9dc849123       busybox
	0994d530b2186       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            11 minutes ago      Running             liveness-probe                           0                   e614fc1047195       csi-hostpathplugin-s98vb
	d78ede218b3d9       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           13 minutes ago      Running             hostpath                                 0                   e614fc1047195       csi-hostpathplugin-s98vb
	16a4495ac9a55       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                14 minutes ago      Running             node-driver-registrar                    0                   e614fc1047195       csi-hostpathplugin-s98vb
	cb0aaa55cf5e9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            15 minutes ago      Running             gadget                                   0                   38b62a86f7523       gadget-g862x
	75b35093f1f14       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              16 minutes ago      Running             registry-proxy                           0                   f2e835ff4c172       registry-proxy-gtpv9
	fce1ccd8d33b3       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   16 minutes ago      Running             csi-external-health-monitor-controller   0                   e614fc1047195       csi-hostpathplugin-s98vb
	0957eacca23bd       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              17 minutes ago      Running             csi-resizer                              0                   b8131d2ee78de       csi-hostpath-resizer-0
	ad4a09c21105c       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             17 minutes ago      Running             csi-attacher                             0                   15f9a9c33b53e       csi-hostpath-attacher-0
	c1b11b9e2fae1       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             18 minutes ago      Running             local-path-provisioner                   0                   be69758a594c2       local-path-provisioner-648f6765c9-4qs6g
	7d0db99be084d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             18 minutes ago      Running             storage-provisioner                      0                   e26878809420e       storage-provisioner
	b62ac7b1e2d93       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             18 minutes ago      Running             coredns                                  0                   90cd65a058e3e       coredns-66bc5c9577-m872b
	81f4db589dfd0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             18 minutes ago      Running             kindnet-cni                              0                   282dceccf27e4       kindnet-hn7tx
	8204c89cdc90d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                                             18 minutes ago      Running             kube-proxy                               0                   076ce47b67764       kube-proxy-v85kq
	d1d2d3ef1a2d6       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                                             18 minutes ago      Running             kube-controller-manager                  0                   2befa508c819b       kube-controller-manager-addons-069011
	f4991aa96dbe9       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                                             18 minutes ago      Running             kube-apiserver                           0                   24f1de8dafedd       kube-apiserver-addons-069011
	ecbc264153ff2       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                                             18 minutes ago      Running             kube-scheduler                           0                   3af000cb5a57c       kube-scheduler-addons-069011
	5a81076e6d9a8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             18 minutes ago      Running             etcd                                     0                   f590790ed13d4       etcd-addons-069011
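
Note that the registry container itself never appears in this list: only registry-proxy-gtpv9 is running, while the registry-66898fdd98-bl4r5 pod from the failing test has no started container at all. A quick cross-check of which registry pods actually have running containers:

    kubectl --context addons-069011 -n kube-system get pods -o wide | grep registry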
	
	
	==> coredns [b62ac7b1e2d935063ca8c0594642886e49ad0423507f04d148e7bd385ca935ce] <==
	[INFO] 10.244.0.16:41844 - 24408 "A IN registry.kube-system.svc.cluster.local.local. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.005351648s
	[INFO] 10.244.0.16:41844 - 61260 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.00010766s
	[INFO] 10.244.0.16:41844 - 57101 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.000097835s
	[INFO] 10.244.0.16:41844 - 46087 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000078007s
	[INFO] 10.244.0.16:41844 - 60029 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000129618s
	[INFO] 10.244.0.16:41844 - 19002 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000074384s
	[INFO] 10.244.0.16:41844 - 22590 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000064099s
	[INFO] 10.244.0.16:41844 - 54810 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000145556s
	[INFO] 10.244.0.16:41844 - 63594 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000157733s
	[INFO] 10.244.0.16:43401 - 40991 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000202993s
	[INFO] 10.244.0.16:43401 - 23256 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000118813s
	[INFO] 10.244.0.16:43401 - 25879 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000151544s
	[INFO] 10.244.0.16:43401 - 44125 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000172841s
	[INFO] 10.244.0.16:43401 - 45377 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000128146s
	[INFO] 10.244.0.16:43401 - 3915 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000138824s
	[INFO] 10.244.0.16:43401 - 27047 "A IN registry.kube-system.svc.cluster.local.local. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.003576529s
	[INFO] 10.244.0.16:43401 - 18740 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00550307s
	[INFO] 10.244.0.16:43401 - 37407 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.000079822s
	[INFO] 10.244.0.16:43401 - 15754 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.000093748s
	[INFO] 10.244.0.16:43401 - 38112 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000102868s
	[INFO] 10.244.0.16:43401 - 38559 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000159161s
	[INFO] 10.244.0.16:43401 - 38515 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000086627s
	[INFO] 10.244.0.16:43401 - 40240 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000112642s
	[INFO] 10.244.0.16:43401 - 10581 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000116974s
	[INFO] 10.244.0.16:43401 - 23239 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000173138s
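
The long NXDOMAIN run above is expected ndots:5 behavior, not a DNS fault: registry.kube-system.svc.cluster.local contains only four dots, fewer than the ndots threshold, so the resolver appends every search suffix from the pod's resolv.conf first (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, then the GCE-provided internal domains) before trying the bare name, which resolves NOERROR. This can be confirmed from the busybox pod already running in the default namespace, using busybox's nslookup applet:

    kubectl --context addons-069011 exec busybox -- cat /etc/resolv.conf
    kubectl --context addons-069011 exec busybox -- nslookup registry.kube-system.svc.cluster.local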
	
	
	==> describe nodes <==
	Name:               addons-069011
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-069011
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=addons-069011
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_49_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-069011
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-069011"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:49:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-069011
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:07:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:03:10 +0000   Tue, 16 Sep 2025 23:49:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:03:10 +0000   Tue, 16 Sep 2025 23:49:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:03:10 +0000   Tue, 16 Sep 2025 23:49:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:03:10 +0000   Tue, 16 Sep 2025 23:49:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-069011
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e6a06e1e17043f19f3b8f5ea0927359
	  System UUID:                fa23b867-4022-409a-8baa-bf981ffedafe
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  gadget                      gadget-g862x                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-66bc5c9577-m872b                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 csi-hostpathplugin-s98vb                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-addons-069011                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         18m
	  kube-system                 kindnet-hn7tx                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-apiserver-addons-069011                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-069011                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-v85kq                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-069011                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 registry-66898fdd98-bl4r5                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 registry-proxy-gtpv9                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  local-path-storage          helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  local-path-storage          local-path-provisioner-648f6765c9-4qs6g                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node addons-069011 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node addons-069011 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node addons-069011 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m   node-controller  Node addons-069011 event: Registered Node addons-069011 in Controller
	  Normal  NodeReady                18m   kubelet          Node addons-069011 status is now: NodeReady
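
Two details in the node description line up with the failures: registry-66898fdd98-bl4r5 has been scheduled for 18m without ever turning Ready, and the helper-pod-create-pvc pod (age 102s), presumably created for the LocalPath test, is stuck on the same busybox pull. Pod-level events for the registry pod can be pulled directly:

    kubectl --context addons-069011 -n kube-system get events \
      --field-selector involvedObject.name=registry-66898fdd98-bl4r5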
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	
	
	==> etcd [5a81076e6d9a8c9983866e09b1190810cd0059c34edeae1a479f9d18f3003a91] <==
	{"level":"warn","ts":"2025-09-16T23:49:01.021210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.027886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.034514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.041663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.048524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.054851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.061680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.068240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.075225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.081757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.105206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.111554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.154896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:12.666348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:12.673196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.575058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.581784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.598000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.605378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33386","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-16T23:59:00.630787Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1449}
	{"level":"info","ts":"2025-09-16T23:59:00.656834Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1449,"took":"25.282457ms","hash":3232880921,"current-db-size-bytes":5799936,"current-db-size":"5.8 MB","current-db-size-in-use-bytes":3645440,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2025-09-16T23:59:00.656898Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3232880921,"revision":1449,"compact-revision":-1}
	{"level":"info","ts":"2025-09-17T00:04:00.635503Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2177}
	{"level":"info","ts":"2025-09-17T00:04:00.654518Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2177,"took":"18.415625ms","hash":2584493315,"current-db-size-bytes":5799936,"current-db-size":"5.8 MB","current-db-size-in-use-bytes":3166208,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2025-09-17T00:04:00.654575Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2584493315,"revision":2177,"compact-revision":1449}
	
	
	==> kernel <==
	 00:07:52 up  2:50,  0 users,  load average: 0.17, 1.57, 21.36
	Linux addons-069011 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [81f4db589dfd0f8f014a7fc056f2d7f752ecc52737aea10ae2f8a98d0242428b] <==
	I0917 00:05:50.191858       1 main.go:301] handling current node
	I0917 00:06:00.189633       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:06:00.189679       1 main.go:301] handling current node
	I0917 00:06:10.189374       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:06:10.189444       1 main.go:301] handling current node
	I0917 00:06:20.190624       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:06:20.190663       1 main.go:301] handling current node
	I0917 00:06:30.189792       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:06:30.189835       1 main.go:301] handling current node
	I0917 00:06:40.186534       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:06:40.186588       1 main.go:301] handling current node
	I0917 00:06:50.191534       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:06:50.191568       1 main.go:301] handling current node
	I0917 00:07:00.183932       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:07:00.183965       1 main.go:301] handling current node
	I0917 00:07:10.187158       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:07:10.187200       1 main.go:301] handling current node
	I0917 00:07:20.185225       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:07:20.185269       1 main.go:301] handling current node
	I0917 00:07:30.186602       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:07:30.186649       1 main.go:301] handling current node
	I0917 00:07:40.184573       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:07:40.184613       1 main.go:301] handling current node
	I0917 00:07:50.185328       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:07:50.185479       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f4991aa96dbe98af7f934784cdc7973d5aabec72325938f0e98ad8efde3d06e3] <==
	I0917 00:00:03.548840       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:01:10.960424       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:01:15.531695       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:28.446522       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:31.841808       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:03:34.885369       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:03:39.392704       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:04:37.349511       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:04:53.960048       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:05:53.849137       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:06:16.946188       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:07:09.469741       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:07:16.845355       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 00:07:16.845449       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 00:07:16.864384       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 00:07:16.865052       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 00:07:16.865231       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 00:07:16.878677       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 00:07:16.878726       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 00:07:16.915745       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 00:07:16.915797       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0917 00:07:17.865852       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0917 00:07:17.916734       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0917 00:07:17.933874       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0917 00:07:28.820684       1 stats.go:136] "Error getting keys" err="empty key: \"\""
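
The burst at 00:07:16 through 00:07:17 (GroupVersions for snapshot.storage.k8s.io re-registered, then all volumesnapshot watchers terminated) is what the API server logs when the VolumeSnapshot CRDs are removed, consistent with the volumesnapshots addon being disabled during test teardown. Whether the CRDs are still present can be checked with:

    kubectl --context addons-069011 get crd | grep snapshot.storage.k8s.io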
	
	
	==> kube-controller-manager [d1d2d3ef1a2d61d604d7b7b71875c31a98127791ebbcaaae9e7c5dcebb1fd036] <==
	E0917 00:07:19.305091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:07:21.327675       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:07:21.328651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:07:21.934887       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:07:21.935929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:07:22.152873       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:07:22.154255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:07:24.757135       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:07:24.758384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:07:26.267574       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:07:26.268873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:07:28.191533       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:07:28.192522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:07:32.345290       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:07:32.346480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:07:35.264348       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:07:35.265438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:07:38.831222       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:07:38.832186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0917 00:07:38.844940       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0917 00:07:38.844978       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:07:38.865449       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0917 00:07:38.865507       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0917 00:07:45.990023       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:07:45.991067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
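
These repeating "failed to list *v1.PartialObjectMetadata" errors are a knock-on effect of the CRD removal above: the controller manager's metadata informers (used by the garbage collector and resource-quota controllers) keep retrying the now-missing snapshot resources until their caches resync, which the two "Caches are synced" lines at 00:07:38 show eventually happening. Listing the removed resource from the client fails for the same reason:

    kubectl --context addons-069011 get volumesnapshots -A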
	
	
	==> kube-proxy [8204c89cdc90d58370aa745a3053c12e5b976409a1e0bedddf9508ac3e770c1f] <==
	I0916 23:49:09.803647       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:49:09.874911       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:49:09.984976       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:49:09.985628       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:49:09.986296       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:49:10.154642       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:49:10.159433       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:49:10.183201       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:49:10.195463       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:49:10.195513       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:49:10.199563       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:49:10.199664       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:49:10.200188       1 config.go:309] "Starting node config controller"
	I0916 23:49:10.200265       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:49:10.200334       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:49:10.200369       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:49:10.200991       1 config.go:200] "Starting service config controller"
	I0916 23:49:10.201078       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:49:10.299859       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:49:10.300474       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0916 23:49:10.300501       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0916 23:49:10.302086       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ecbc264153ff2a219390febac6665f8efc1a49ab24db502b79ba6888e6bd5b71] <==
	E0916 23:49:01.591306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0916 23:49:01.591979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0916 23:49:01.591995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:49:01.592038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0916 23:49:01.592032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0916 23:49:01.592058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0916 23:49:01.592081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0916 23:49:01.592128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0916 23:49:01.592273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0916 23:49:01.592272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0916 23:49:01.592315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0916 23:49:02.478666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0916 23:49:02.478742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0916 23:49:02.495998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0916 23:49:02.533597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0916 23:49:02.645572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0916 23:49:02.658831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0916 23:49:02.700650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:49:02.730028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0916 23:49:02.731014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0916 23:49:02.807698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0916 23:49:02.811032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0916 23:49:02.813063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0916 23:49:02.832467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I0916 23:49:05.387364       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 17 00:07:17 addons-069011 kubelet[1557]: I0917 00:07:17.369596    1557 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fcrvr\" (UniqueName: \"kubernetes.io/projected/100900c8-3969-4728-9976-e2aa3a810064-kube-api-access-fcrvr\") on node \"addons-069011\" DevicePath \"\""
	Sep 17 00:07:18 addons-069011 kubelet[1557]: I0917 00:07:18.176271    1557 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="100900c8-3969-4728-9976-e2aa3a810064" path="/var/lib/kubelet/pods/100900c8-3969-4728-9976-e2aa3a810064/volumes"
	Sep 17 00:07:18 addons-069011 kubelet[1557]: I0917 00:07:18.176782    1557 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bcc527a-ffe8-4b57-a90c-e0ab34894d2c" path="/var/lib/kubelet/pods/3bcc527a-ffe8-4b57-a90c-e0ab34894d2c/volumes"
	Sep 17 00:07:19 addons-069011 kubelet[1557]: E0917 00:07:19.175297    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-bl4r5" podUID="34782a61-58ac-458e-ab2f-7a22bac44c65"
	Sep 17 00:07:20 addons-069011 kubelet[1557]: E0917 00:07:20.174220    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="0b15e693-4577-4039-b409-5badaa871bfc"
	Sep 17 00:07:24 addons-069011 kubelet[1557]: E0917 00:07:24.421813    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067644421594641  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:07:24 addons-069011 kubelet[1557]: E0917 00:07:24.421845    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067644421594641  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:07:27 addons-069011 kubelet[1557]: E0917 00:07:27.174795    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="44795e64-34b3-4492-b6af-9e6353fa4bb4"
	Sep 17 00:07:30 addons-069011 kubelet[1557]: E0917 00:07:30.174908    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-bl4r5" podUID="34782a61-58ac-458e-ab2f-7a22bac44c65"
	Sep 17 00:07:31 addons-069011 kubelet[1557]: E0917 00:07:31.174250    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="0b15e693-4577-4039-b409-5badaa871bfc"
	Sep 17 00:07:33 addons-069011 kubelet[1557]: I0917 00:07:33.173971    1557 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-gtpv9" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 00:07:34 addons-069011 kubelet[1557]: E0917 00:07:34.424471    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067654424124513  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:07:34 addons-069011 kubelet[1557]: E0917 00:07:34.424521    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067654424124513  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:07:36 addons-069011 kubelet[1557]: I0917 00:07:36.174131    1557 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 00:07:39 addons-069011 kubelet[1557]: E0917 00:07:39.175006    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="44795e64-34b3-4492-b6af-9e6353fa4bb4"
	Sep 17 00:07:39 addons-069011 kubelet[1557]: E0917 00:07:39.365678    1557 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 17 00:07:39 addons-069011 kubelet[1557]: E0917 00:07:39.365743    1557 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 17 00:07:39 addons-069011 kubelet[1557]: E0917 00:07:39.365837    1557 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb_local-path-storage(de6c504b-6eb1-4731-8d69-f050d70230ed): ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 00:07:39 addons-069011 kubelet[1557]: E0917 00:07:39.365870    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb" podUID="de6c504b-6eb1-4731-8d69-f050d70230ed"
	Sep 17 00:07:39 addons-069011 kubelet[1557]: E0917 00:07:39.379207    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb" podUID="de6c504b-6eb1-4731-8d69-f050d70230ed"
	Sep 17 00:07:42 addons-069011 kubelet[1557]: E0917 00:07:42.174595    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="0b15e693-4577-4039-b409-5badaa871bfc"
	Sep 17 00:07:44 addons-069011 kubelet[1557]: E0917 00:07:44.175508    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-bl4r5" podUID="34782a61-58ac-458e-ab2f-7a22bac44c65"
	Sep 17 00:07:44 addons-069011 kubelet[1557]: E0917 00:07:44.426245    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067664425973013  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:07:44 addons-069011 kubelet[1557]: E0917 00:07:44.426281    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067664425973013  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:07:50 addons-069011 kubelet[1557]: E0917 00:07:50.174960    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="44795e64-34b3-4492-b6af-9e6353fa4bb4"
	
	
	==> storage-provisioner [7d0db99be084d7a7996f085af51ba0b4b9263d1a30c5ba98cac79995b3641b35] <==
	W0917 00:07:27.716468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:29.722085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:29.727506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:31.730983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:31.736474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:33.740191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:33.744791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:35.748723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:35.753467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:37.757146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:37.762490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:39.765927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:39.770133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:41.773992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:41.779530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:43.782942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:43.787110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:45.791279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:45.795482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:47.799492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:47.804887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:49.808258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:49.812719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:51.816459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:07:51.821001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
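Every pull failure in the logs above shares one root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests). Docker Hub reports the remaining quota in response headers on the special ratelimitpreview/test repository, so the quota can be checked directly from the affected host. A minimal sketch, assuming curl and jq are available on the CI machine:

	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	# A HEAD request does not consume quota; look for ratelimit-limit / ratelimit-remaining
	curl -sI -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

Anonymous pulls are metered per source IP, so a shared CI agent such as ubuntu-20-agent-4 exhausts the limit quickly when many parallel tests pull from docker.io.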
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-069011 -n addons-069011
helpers_test.go:269: (dbg) Run:  kubectl --context addons-069011 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path registry-66898fdd98-bl4r5 helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-069011 describe pod nginx task-pv-pod test-local-path registry-66898fdd98-bl4r5 helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-069011 describe pod nginx task-pv-pod test-local-path registry-66898fdd98-bl4r5 helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb: exit status 1 (91.801965ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-069011/192.168.49.2
	Start Time:       Tue, 16 Sep 2025 23:56:47 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.24
	IPs:
	  IP:  10.244.0.24
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kksmh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kksmh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  11m                   default-scheduler  Successfully assigned default/nginx to addons-069011
	  Normal   Pulling    3m14s (x5 over 11m)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     104s (x5 over 9m20s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     104s (x5 over 9m20s)  kubelet            Error: ErrImagePull
	  Warning  Failed     26s (x16 over 9m20s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3s (x18 over 9m20s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-069011/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:01:13 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rfz5d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-rfz5d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m40s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-069011
	  Normal   Pulling    112s (x4 over 6m40s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     44s (x4 over 5m15s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     44s (x4 over 5m15s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    11s (x8 over 5m15s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     11s (x8 over 5m15s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s54zg (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-s54zg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "registry-66898fdd98-bl4r5" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-069011 describe pod nginx task-pv-pod test-local-path registry-66898fdd98-bl4r5 helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-069011 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.797094321s)
--- FAIL: TestAddons/parallel/LocalPath (345.69s)
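One way to decouple tests like LocalPath from docker.io availability is to load the required images into the cluster ahead of time with minikube's image subcommand, so the kubelet never has to pull. A sketch, assuming the host itself can still fetch the image (for example, it is authenticated via docker login or sits behind a mirror):

	docker pull busybox:stable
	minikube -p addons-069011 image load busybox:stable
	minikube -p addons-069011 image ls    # verify the image landed in the cluster's image store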

                                                
                                    
x
+
TestAddons/parallel/Yakd (128.83s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-pl9vq" [948400a2-9e11-40dd-af78-237e95b937e2] Pending / Ready:ContainersNotReady (containers with unready status: [yakd]) / ContainersReady:ContainersNotReady (containers with unready status: [yakd])
addons_test.go:1047: ***** TestAddons/parallel/Yakd: pod "app.kubernetes.io/name=yakd-dashboard" failed to start within 2m0s: context deadline exceeded ****
addons_test.go:1047: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-069011 -n addons-069011
addons_test.go:1047: TestAddons/parallel/Yakd: showing logs for failed pods as of 2025-09-17 00:00:49.185292448 +0000 UTC m=+756.652636017
addons_test.go:1047: (dbg) Run:  kubectl --context addons-069011 describe po yakd-dashboard-5ff678cb9-pl9vq -n yakd-dashboard
addons_test.go:1047: (dbg) kubectl --context addons-069011 describe po yakd-dashboard-5ff678cb9-pl9vq -n yakd-dashboard:
Name:             yakd-dashboard-5ff678cb9-pl9vq
Namespace:        yakd-dashboard
Priority:         0
Service Account:  yakd-dashboard
Node:             addons-069011/192.168.49.2
Start Time:       Tue, 16 Sep 2025 23:49:50 +0000
Labels:           app.kubernetes.io/instance=yakd-dashboard
                  app.kubernetes.io/name=yakd-dashboard
                  gcp-auth-skip-secret=true
                  pod-template-hash=5ff678cb9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/yakd-dashboard-5ff678cb9
Containers:
  yakd:
    Container ID:   
    Image:          docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624
    Image ID:       
    Port:           8080/TCP (http)
    Host Port:      0/TCP (http)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  256Mi
    Requests:
      memory:   128Mi
    Liveness:   http-get http://:8080/ delay=10s timeout=10s period=10s #success=1 #failure=3
    Readiness:  http-get http://:8080/ delay=10s timeout=10s period=10s #success=1 #failure=3
    Environment:
      KUBERNETES_NAMESPACE:  yakd-dashboard (v1:metadata.namespace)
      HOSTNAME:              yakd-dashboard-5ff678cb9-pl9vq (v1:metadata.name)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-455tc (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-455tc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  11m                  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Normal   Scheduled         10m                  default-scheduler  Successfully assigned yakd-dashboard/yakd-dashboard-5ff678cb9-pl9vq to addons-069011
  Normal   Pulling           2m23s (x5 over 10m)  kubelet            Pulling image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
  Warning  Failed            72s (x5 over 10m)    kubelet            Failed to pull image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624": reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed            72s (x5 over 10m)    kubelet            Error: ErrImagePull
  Normal   BackOff           8s (x14 over 10m)    kubelet            Back-off pulling image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
  Warning  Failed            8s (x14 over 10m)    kubelet            Error: ImagePullBackOff
addons_test.go:1047: (dbg) Run:  kubectl --context addons-069011 logs yakd-dashboard-5ff678cb9-pl9vq -n yakd-dashboard
addons_test.go:1047: (dbg) Non-zero exit: kubectl --context addons-069011 logs yakd-dashboard-5ff678cb9-pl9vq -n yakd-dashboard: exit status 1 (73.429586ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "yakd" in pod "yakd-dashboard-5ff678cb9-pl9vq" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:1047: kubectl --context addons-069011 logs yakd-dashboard-5ff678cb9-pl9vq -n yakd-dashboard: exit status 1
addons_test.go:1048: failed waiting for YAKD - Kubernetes Dashboard pod: app.kubernetes.io/name=yakd-dashboard within 2m0s: context deadline exceeded
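The Yakd image also lives on docker.io, so it fails for the same reason. Another mitigation is to route Hub pulls through a pull-through cache when the cluster is created. minikube exposes a --registry-mirror flag for this; it is documented for the Docker runtime, and with crio the mirror may instead need to be written into the runtime's registries configuration, so treat this as a sketch rather than a verified fix for this job:

	minikube start -p addons-069011 --driver=docker --container-runtime=crio \
	  --registry-mirror=https://mirror.gcr.io

mirror.gcr.io caches frequently pulled Docker Hub images; anything not in the cache still falls through to docker.io and counts against the limit.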
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Yakd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Yakd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-069011
helpers_test.go:243: (dbg) docker inspect addons-069011:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1",
	        "Created": "2025-09-16T23:48:50.029636255Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 523240,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-16T23:48:50.075029861Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/hosts",
	        "LogPath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1-json.log",
	        "Name": "/addons-069011",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-069011:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-069011",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1",
	                "LowerDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-069011",
	                "Source": "/var/lib/docker/volumes/addons-069011/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-069011",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-069011",
	                "name.minikube.sigs.k8s.io": "addons-069011",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f7ea0b62281ff8981f73b140342aff58601fbb663df7278dfdd6743a41abcca5",
	            "SandboxKey": "/var/run/docker/netns/f7ea0b62281f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-069011": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:4c:3e:1e:87:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d62ec0fa3bfb3ffd62859a508f03996c549db14f34473599ddd1b9022067b7b9",
	                    "EndpointID": "f8f4fe858390c8f96bc24eec26736fad3a3b1ba30f09e93e016a6a79e947f7af",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-069011",
	                        "678205c9d470"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
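The inspect dump above is captured mainly for the port mappings under NetworkSettings.Ports. Rather than scanning the JSON, a Go template can extract a single mapping directly; for example, the host port mapped to the container's SSH port. The key path and "docker" user below are minikube's usual defaults, assumed here rather than taken from this run:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' addons-069011
	# -> 33133 for this run
	ssh -p 33133 -i ~/.minikube/machines/addons-069011/id_rsa docker@127.0.0.1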
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-069011 -n addons-069011
helpers_test.go:252: <<< TestAddons/parallel/Yakd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Yakd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-069011 logs -n 25: (1.420683636s)
helpers_test.go:260: TestAddons/parallel/Yakd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-997829 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-997829   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ delete  │ -p download-only-997829                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-997829   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ start   │ -o=json --download-only -p download-only-515641 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-515641   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ delete  │ -p download-only-515641                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-515641   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ delete  │ -p download-only-997829                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-997829   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ delete  │ -p download-only-515641                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-515641   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ start   │ --download-only -p download-docker-660125 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-660125 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ delete  │ -p download-docker-660125                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-660125 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ start   │ --download-only -p binary-mirror-785971 --alsologtostderr --binary-mirror http://127.0.0.1:38515 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-785971   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ delete  │ -p binary-mirror-785971                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-785971   │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ addons  │ enable dashboard -p addons-069011                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ addons  │ disable dashboard -p addons-069011                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ start   │ -p addons-069011 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:55 UTC │
	│ addons  │ addons-069011 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:55 UTC │ 16 Sep 25 23:55 UTC │
	│ addons  │ addons-069011 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ enable headlamp -p addons-069011 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ addons-069011 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-069011                                                                                                                                                                                                                                                                                                                                                                                           │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ addons-069011 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ addons-069011 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:57 UTC │
	│ addons  │ addons-069011 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-069011          │ jenkins │ v1.37.0 │ 16 Sep 25 23:58 UTC │ 16 Sep 25 23:58 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:48:27
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:48:27.723751  522590 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:48:27.723864  522590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:48:27.723869  522590 out.go:374] Setting ErrFile to fd 2...
	I0916 23:48:27.723873  522590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:48:27.724066  522590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0916 23:48:27.724618  522590 out.go:368] Setting JSON to false
	I0916 23:48:27.725494  522590 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9051,"bootTime":1758057457,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:48:27.725585  522590 start.go:140] virtualization: kvm guest
	I0916 23:48:27.728073  522590 out.go:179] * [addons-069011] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:48:27.729850  522590 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:48:27.729868  522590 notify.go:220] Checking for updates...
	I0916 23:48:27.733822  522590 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:48:27.736141  522590 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0916 23:48:27.738039  522590 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0916 23:48:27.740423  522590 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:48:27.743368  522590 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:48:27.746574  522590 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:48:27.771724  522590 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:48:27.771874  522590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:48:27.829971  522590 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:48:27.818365984 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:48:27.830249  522590 docker.go:318] overlay module found
	I0916 23:48:27.832946  522590 out.go:179] * Using the docker driver based on user configuration
	I0916 23:48:27.834751  522590 start.go:304] selected driver: docker
	I0916 23:48:27.834826  522590 start.go:918] validating driver "docker" against <nil>
	I0916 23:48:27.834849  522590 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:48:27.835571  522590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:48:27.897913  522590 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:48:27.886229333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:48:27.898100  522590 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:48:27.898315  522590 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:48:27.900183  522590 out.go:179] * Using Docker driver with root privileges
	I0916 23:48:27.901481  522590 cni.go:84] Creating CNI manager for ""
	I0916 23:48:27.901597  522590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 23:48:27.901613  522590 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:48:27.901710  522590 start.go:348] cluster config:
	{Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

                                                
                                                
	I0916 23:48:27.903324  522590 out.go:179] * Starting "addons-069011" primary control-plane node in "addons-069011" cluster
	I0916 23:48:27.904623  522590 cache.go:123] Beginning downloading kic base image for docker with crio
	I0916 23:48:27.905841  522590 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:48:27.907270  522590 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:48:27.907330  522590 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:48:27.907328  522590 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0916 23:48:27.907354  522590 cache.go:58] Caching tarball of preloaded images
	I0916 23:48:27.907495  522590 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 23:48:27.907513  522590 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0916 23:48:27.907895  522590 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/config.json ...
	I0916 23:48:27.907924  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/config.json: {Name:mk15dc7feab5fd17bb004b2e5f6ac3bc55ac0d4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:27.925199  522590 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0916 23:48:27.925352  522590 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0916 23:48:27.925371  522590 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0916 23:48:27.925375  522590 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0916 23:48:27.925383  522590 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0916 23:48:27.925403  522590 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0916 23:48:40.932191  522590 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0916 23:48:40.932224  522590 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:48:40.932259  522590 start.go:360] acquireMachinesLock for addons-069011: {Name:mk9387b718f452cc25627a84d4c20b7f46084ff2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:48:40.932371  522590 start.go:364] duration metric: took 90.542µs to acquireMachinesLock for "addons-069011"
	I0916 23:48:40.932411  522590 start.go:93] Provisioning new machine with config: &{Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 23:48:40.932527  522590 start.go:125] createHost starting for "" (driver="docker")
	I0916 23:48:40.934531  522590 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0916 23:48:40.934774  522590 start.go:159] libmachine.API.Create for "addons-069011" (driver="docker")
	I0916 23:48:40.934810  522590 client.go:168] LocalClient.Create starting
	I0916 23:48:40.934920  522590 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0916 23:48:41.819608  522590 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0916 23:48:42.094971  522590 cli_runner.go:164] Run: docker network inspect addons-069011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 23:48:42.113173  522590 cli_runner.go:211] docker network inspect addons-069011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 23:48:42.113240  522590 network_create.go:284] running [docker network inspect addons-069011] to gather additional debugging logs...
	I0916 23:48:42.113258  522590 cli_runner.go:164] Run: docker network inspect addons-069011
	W0916 23:48:42.130815  522590 cli_runner.go:211] docker network inspect addons-069011 returned with exit code 1
	I0916 23:48:42.130846  522590 network_create.go:287] error running [docker network inspect addons-069011]: docker network inspect addons-069011: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-069011 not found
	I0916 23:48:42.130884  522590 network_create.go:289] output of [docker network inspect addons-069011]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-069011 not found
	
	** /stderr **
	I0916 23:48:42.130990  522590 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:48:42.149832  522590 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002180220}
	I0916 23:48:42.149931  522590 network_create.go:124] attempt to create docker network addons-069011 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 23:48:42.150036  522590 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-069011 addons-069011
	I0916 23:48:42.212157  522590 network_create.go:108] docker network addons-069011 192.168.49.0/24 created
	I0916 23:48:42.212194  522590 kic.go:121] calculated static IP "192.168.49.2" for the "addons-069011" container
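
	The probe-then-create pattern above (inspect the named network, fall back to `docker network create` on the first free private /24) can be reproduced standalone. A minimal Go sketch, assuming only the docker CLI on PATH; ensureNetwork is an illustrative helper name, not minikube's API, and the inspect template is simplified from the one in the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// ensureNetwork probes for a docker network and creates it when missing,
	// mirroring the inspect/create pair in the log above.
	func ensureNetwork(name, subnet, gateway string) (string, error) {
		// `docker network inspect` exits non-zero when the network is absent,
		// which is the "returned with exit code 1" path logged above.
		out, err := exec.Command("docker", "network", "inspect", name,
			"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil // already exists
		}
		// Flags copied from the `docker network create` invocation above.
		create := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true", name)
		if err := create.Run(); err != nil {
			return "", fmt.Errorf("create network %s: %w", name, err)
		}
		return subnet, nil
	}

	func main() {
		subnet, err := ensureNetwork("addons-069011", "192.168.49.0/24", "192.168.49.1")
		fmt.Println(subnet, err)
	}
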
	I0916 23:48:42.212312  522590 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:48:42.229867  522590 cli_runner.go:164] Run: docker volume create addons-069011 --label name.minikube.sigs.k8s.io=addons-069011 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:48:42.252846  522590 oci.go:103] Successfully created a docker volume addons-069011
	I0916 23:48:42.252968  522590 cli_runner.go:164] Run: docker run --rm --name addons-069011-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069011 --entrypoint /usr/bin/test -v addons-069011:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:48:45.649491  522590 cli_runner.go:217] Completed: docker run --rm --name addons-069011-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069011 --entrypoint /usr/bin/test -v addons-069011:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (3.39647838s)
	I0916 23:48:45.649523  522590 oci.go:107] Successfully prepared a docker volume addons-069011
	I0916 23:48:45.649558  522590 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:48:45.649589  522590 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:48:45.649695  522590 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-069011:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:48:49.956300  522590 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-069011:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.306552681s)
	I0916 23:48:49.956343  522590 kic.go:203] duration metric: took 4.306749088s to extract preloaded images to volume ...
	W0916 23:48:49.956477  522590 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:48:49.956523  522590 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:48:49.956572  522590 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:48:50.013382  522590 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-069011 --name addons-069011 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069011 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-069011 --network addons-069011 --ip 192.168.49.2 --volume addons-069011:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:48:50.304600  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Running}}
	I0916 23:48:50.323420  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:48:50.342386  522590 cli_runner.go:164] Run: docker exec addons-069011 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:48:50.402276  522590 oci.go:144] the created container "addons-069011" has a running status.
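
	The container publishes its ports to 127.0.0.1 with ephemeral host ports (the --publish=127.0.0.1::22 flags above), so every later SSH or API call first resolves the real port with a `docker container inspect` Go template, as in the cli_runner lines below. A small sketch of that lookup; hostPortFor is a hypothetical helper name:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPortFor resolves the ephemeral host port docker assigned to a
	// published container port (e.g. "22/tcp" resolves to 33133 in this run),
	// using the same inspect template seen in the log.
	func hostPortFor(container, port string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		p, err := hostPortFor("addons-069011", "22/tcp")
		fmt.Println(p, err)
	}
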
	I0916 23:48:50.402326  522590 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa...
	I0916 23:48:50.521235  522590 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:48:50.553384  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:48:50.579068  522590 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:48:50.579099  522590 kic_runner.go:114] Args: [docker exec --privileged addons-069011 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 23:48:50.638566  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:48:50.659803  522590 machine.go:93] provisionDockerMachine start ...
	I0916 23:48:50.660411  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:50.680019  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:50.680310  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:50.680332  522590 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:48:50.820950  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-069011
	
	I0916 23:48:50.820990  522590 ubuntu.go:182] provisioning hostname "addons-069011"
	I0916 23:48:50.821063  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:50.841195  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:50.841673  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:50.841710  522590 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-069011 && echo "addons-069011" | sudo tee /etc/hostname
	I0916 23:48:50.996855  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-069011
	
	I0916 23:48:50.996967  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.016407  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:51.016637  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:51.016655  522590 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-069011' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-069011/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-069011' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:48:51.154270  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
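
	The `hostname` and /etc/hosts commands above run over the "native" SSH client against the forwarded loopback port (33133 in this run). A stripped-down sketch of that probe with golang.org/x/crypto/ssh; the port, user, and key path are taken from this log, and host-key checking is skipped only because the endpoint is a loopback-forwarded port:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // loopback-only port
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33133", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s", out) // expected: addons-069011
	}
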
	I0916 23:48:51.154311  522590 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0916 23:48:51.154380  522590 ubuntu.go:190] setting up certificates
	I0916 23:48:51.154420  522590 provision.go:84] configureAuth start
	I0916 23:48:51.154487  522590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069011
	I0916 23:48:51.173820  522590 provision.go:143] copyHostCerts
	I0916 23:48:51.173904  522590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0916 23:48:51.174069  522590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0916 23:48:51.174140  522590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0916 23:48:51.174195  522590 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.addons-069011 san=[127.0.0.1 192.168.49.2 addons-069011 localhost minikube]
	I0916 23:48:51.417777  522590 provision.go:177] copyRemoteCerts
	I0916 23:48:51.417839  522590 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:48:51.417897  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.435902  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:51.535686  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 23:48:51.563321  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:48:51.590971  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 23:48:51.617420  522590 provision.go:87] duration metric: took 462.978002ms to configureAuth
	I0916 23:48:51.617461  522590 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:48:51.617668  522590 config.go:182] Loaded profile config "addons-069011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:48:51.617795  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.638144  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:51.638409  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:51.638436  522590 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 23:48:51.891077  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 23:48:51.891114  522590 machine.go:96] duration metric: took 1.230812219s to provisionDockerMachine
	I0916 23:48:51.891125  522590 client.go:171] duration metric: took 10.956309615s to LocalClient.Create
	I0916 23:48:51.891146  522590 start.go:167] duration metric: took 10.956377105s to libmachine.API.Create "addons-069011"
	I0916 23:48:51.891155  522590 start.go:293] postStartSetup for "addons-069011" (driver="docker")
	I0916 23:48:51.891170  522590 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:48:51.891245  522590 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:48:51.891288  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.909900  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.010593  522590 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:48:52.014317  522590 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:48:52.014357  522590 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:48:52.014366  522590 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:48:52.014375  522590 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:48:52.014406  522590 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0916 23:48:52.014479  522590 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0916 23:48:52.014515  522590 start.go:296] duration metric: took 123.348567ms for postStartSetup
	I0916 23:48:52.014852  522590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069011
	I0916 23:48:52.034024  522590 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/config.json ...
	I0916 23:48:52.034357  522590 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:48:52.034430  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:52.053383  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.147697  522590 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:48:52.152300  522590 start.go:128] duration metric: took 11.219755748s to createHost
	I0916 23:48:52.152322  522590 start.go:83] releasing machines lock for "addons-069011", held for 11.219940729s
	I0916 23:48:52.152383  522590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069011
	I0916 23:48:52.170897  522590 ssh_runner.go:195] Run: cat /version.json
	I0916 23:48:52.170959  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:52.170960  522590 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:48:52.171033  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:52.190054  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.190316  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.282770  522590 ssh_runner.go:195] Run: systemctl --version
	I0916 23:48:52.358127  522590 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 23:48:52.500662  522590 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:48:52.505640  522590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:48:52.530299  522590 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:48:52.530413  522590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:48:52.562277  522590 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:48:52.562302  522590 start.go:495] detecting cgroup driver to use...
	I0916 23:48:52.562333  522590 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:48:52.562405  522590 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:48:52.578904  522590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:48:52.592493  522590 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:48:52.592567  522590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:48:52.607812  522590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:48:52.623718  522590 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:48:52.695401  522590 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:48:52.772869  522590 docker.go:234] disabling docker service ...
	I0916 23:48:52.772931  522590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:48:52.793499  522590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:48:52.806446  522590 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:48:52.880604  522590 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:48:52.994666  522590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:48:53.008181  522590 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:48:53.026581  522590 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0916 23:48:53.026648  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.040463  522590 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0916 23:48:53.040546  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.052415  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.063700  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.074445  522590 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:48:53.085081  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.097098  522590 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.114871  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.125827  522590 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:48:53.135170  522590 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:48:53.145546  522590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:48:53.253634  522590 ssh_runner.go:195] Run: sudo systemctl restart crio
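
	The sequence above edits /etc/crio/crio.conf.d/02-crio.conf entirely with sed (pin pause_image, force the systemd cgroup manager, re-add conmon_cgroup, open unprivileged low ports) and then daemon-reloads and restarts crio. The two central rewrites reduce to line-anchored regex replacements; a sketch in Go over an in-memory copy, illustrative only since the real flow shells out to sed over SSH:

	package main

	import (
		"fmt"
		"regexp"
	)

	// applyCrioOverrides mirrors the first two sed edits above on a copy of
	// the 02-crio.conf contents: pin the pause image and set the cgroup
	// manager, replacing whole matching lines just as `sed -i 's|^.*...|'` does.
	func applyCrioOverrides(conf string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "systemd"`)
		return conf
	}

	func main() {
		in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"
		fmt.Print(applyCrioOverrides(in))
	}
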
	I0916 23:48:53.356442  522590 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 23:48:53.356540  522590 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 23:48:53.360459  522590 start.go:563] Will wait 60s for crictl version
	I0916 23:48:53.360526  522590 ssh_runner.go:195] Run: which crictl
	I0916 23:48:53.364103  522590 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:48:53.402094  522590 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 23:48:53.402233  522590 ssh_runner.go:195] Run: crio --version
	I0916 23:48:53.441123  522590 ssh_runner.go:195] Run: crio --version
	I0916 23:48:53.481919  522590 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0916 23:48:53.483462  522590 cli_runner.go:164] Run: docker network inspect addons-069011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:48:53.502054  522590 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:48:53.506129  522590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
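
	The one-liner above makes the host.minikube.internal entry idempotent: filter out any stale line for the name, append a fresh "IP<TAB>name" entry, and copy the temp file back over /etc/hosts. The same pattern in Go; ensureHostsEntry is an illustrative name, and the real command runs under sudo on the node:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the { grep -v ...; echo ...; } > tmp; cp tmp
	// pattern above: drop old entries for the name, append a fresh one.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if line == "" || strings.HasSuffix(line, "\t"+name) {
				continue // drop blanks and stale entries (grep -v $'\t<name>$')
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		fmt.Println(ensureHostsEntry("hosts", "192.168.49.1", "host.minikube.internal"))
	}
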
	I0916 23:48:53.518646  522590 kubeadm.go:875] updating cluster {Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 23:48:53.518762  522590 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:48:53.518816  522590 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:48:53.590933  522590 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 23:48:53.590961  522590 crio.go:433] Images already preloaded, skipping extraction
	I0916 23:48:53.591020  522590 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:48:53.627023  522590 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 23:48:53.627057  522590 cache_images.go:85] Images are preloaded, skipping loading
	I0916 23:48:53.627066  522590 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0916 23:48:53.627155  522590 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-069011 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:48:53.627228  522590 ssh_runner.go:195] Run: crio config
	I0916 23:48:53.674869  522590 cni.go:84] Creating CNI manager for ""
	I0916 23:48:53.674893  522590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 23:48:53.674906  522590 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 23:48:53.674926  522590 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-069011 NodeName:addons-069011 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 23:48:53.675093  522590 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-069011"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 23:48:53.675157  522590 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:48:53.685496  522590 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:48:53.685568  522590 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 23:48:53.695890  522590 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0916 23:48:53.715420  522590 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:48:53.738183  522590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
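
	The 2209-byte kubeadm.yaml staged above is a four-document stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, as printed earlier in this log). One quick way to sanity-check such a file before `kubeadm init` is to decode it document by document; a sketch with gopkg.in/yaml.v3, where typeMeta deliberately captures only the two fields every document shares:

	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	// typeMeta matches the apiVersion/kind pair present in every document.
	type typeMeta struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	// Reads a multi-document kubeadm.yaml (like the one generated above) and
	// prints each document's kind; yaml.Decoder walks "---"-separated docs.
	func main() {
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var tm typeMeta
			if err := dec.Decode(&tm); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s %s\n", tm.APIVersion, tm.Kind)
		}
	}
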
	I0916 23:48:53.758975  522590 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 23:48:53.763002  522590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:48:53.775153  522590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:48:53.837066  522590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:48:53.861100  522590 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011 for IP: 192.168.49.2
	I0916 23:48:53.861120  522590 certs.go:194] generating shared ca certs ...
	I0916 23:48:53.861145  522590 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:53.861308  522590 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0916 23:48:54.155814  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt ...
	I0916 23:48:54.155846  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt: {Name:mk009b1713fd08c38e8c6ac054b69276424ded29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.156071  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key ...
	I0916 23:48:54.156093  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key: {Name:mk39b68875de7851b17692da85e287f48166d2fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.156213  522590 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0916 23:48:54.291541  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt ...
	I0916 23:48:54.291579  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt: {Name:mk94baf5fb1a8134bb0c9a9f3d32b751fe0bf777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.291793  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key ...
	I0916 23:48:54.291817  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key: {Name:mk06b3e70f919971eec12f66023f6279f2a9059e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
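
	The certs.go/crypto.go steps above first mint two self-signed CAs (minikubeCA and proxyClientCA) and then sign the profile certs against them with the SANs listed below. The CA half reduces to a few crypto/x509 calls; a compact sketch, with field values chosen for illustration rather than copied from minikube:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"os"
		"time"
	)

	// Sketch of the 'generating "minikubeCA" ca cert' step above: a
	// self-signed CA written out as ca.crt/ca.key PEM files.
	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		// Template doubles as parent, which makes the certificate self-signed.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		crt, err := os.Create("ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(crt, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		crt.Close()
		k, err := os.Create("ca.key")
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(k, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		k.Close()
	}
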
	I0916 23:48:54.291928  522590 certs.go:256] generating profile certs ...
	I0916 23:48:54.292014  522590 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.key
	I0916 23:48:54.292060  522590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt with IP's: []
	I0916 23:48:54.529110  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt ...
	I0916 23:48:54.529147  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: {Name:mk9156e00306316f93255eae42ecd81bb5d60b0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.529374  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.key ...
	I0916 23:48:54.529406  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.key: {Name:mk15bd78effcf8815d5571a84284c31db31b997e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.529525  522590 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd
	I0916 23:48:54.529556  522590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0916 23:48:54.601370  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd ...
	I0916 23:48:54.601415  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd: {Name:mkb42f86b810cddd05c27083cd910769800b1942 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.602548  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd ...
	I0916 23:48:54.602578  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd: {Name:mkf41ec91a0589b4d908c830ee946e4604a6886c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.603343  522590 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt
	I0916 23:48:54.603493  522590 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key
	I0916 23:48:54.603577  522590 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key
	I0916 23:48:54.603602  522590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt with IP's: []
	I0916 23:48:54.685718  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt ...
	I0916 23:48:54.685751  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt: {Name:mk4c4f7fbd326f3d00c11caa86441b715a5844e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.686777  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key ...
	I0916 23:48:54.686809  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key: {Name:mkde64e1b9ef5bdc16ad6f2b11b391d65f689b86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.687062  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:48:54.687107  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0916 23:48:54.687130  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:48:54.687161  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0916 23:48:54.687932  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:48:54.717259  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:48:54.744669  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:48:54.771438  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 23:48:54.799454  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 23:48:54.826220  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:48:54.853243  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:48:54.878912  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 23:48:54.905711  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:48:54.935757  522590 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 23:48:54.956698  522590 ssh_runner.go:195] Run: openssl version
	I0916 23:48:54.962817  522590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:48:54.976805  522590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:48:54.980979  522590 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:48:54.981051  522590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:48:54.988637  522590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:48:55.000379  522590 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:48:55.004385  522590 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:48:55.004456  522590 kubeadm.go:392] StartCluster: {Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:48:55.004547  522590 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 23:48:55.004599  522590 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 23:48:55.043443  522590 cri.go:89] found id: ""
	I0916 23:48:55.043525  522590 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 23:48:55.053975  522590 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 23:48:55.064119  522590 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 23:48:55.064186  522590 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 23:48:55.074381  522590 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 23:48:55.074421  522590 kubeadm.go:157] found existing configuration files:
	
	I0916 23:48:55.074469  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 23:48:55.084667  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 23:48:55.084749  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 23:48:55.095859  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 23:48:55.106006  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 23:48:55.106068  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 23:48:55.115485  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 23:48:55.124880  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 23:48:55.124952  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 23:48:55.134292  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 23:48:55.144662  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 23:48:55.144725  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
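Note: the sequence above is minikube's stale-config cleanup. For each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and removes any file that does not mention it; on this fresh node grep exits with status 2 because the files simply do not exist yet, so each rm is a no-op. A minimal Go sketch of the same grep-then-rm pattern (hypothetical helper, not minikube's actual implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // cleanStaleKubeconfigs removes any kubeconfig that does not reference the
    // expected API endpoint, so `kubeadm init` can regenerate it. grep exits 0
    // on a match, 1 on no match, and 2 when the file is missing.
    func cleanStaleKubeconfigs(endpoint string, files []string) {
    	for _, f := range files {
    		if exec.Command("sudo", "grep", endpoint, f).Run() == nil {
    			continue // endpoint present, keep the file
    		}
    		if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "rm %s: %v\n", f, err)
    		}
    	}
    }

    func main() {
    	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }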
	I0916 23:48:55.154111  522590 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 23:48:55.211692  522590 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0916 23:48:55.271378  522590 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 23:49:04.949743  522590 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0916 23:49:04.949820  522590 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 23:49:04.949928  522590 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 23:49:04.950016  522590 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0916 23:49:04.950100  522590 kubeadm.go:310] OS: Linux
	I0916 23:49:04.950168  522590 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 23:49:04.950250  522590 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 23:49:04.950311  522590 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 23:49:04.950355  522590 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 23:49:04.950436  522590 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 23:49:04.950511  522590 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 23:49:04.950590  522590 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 23:49:04.950659  522590 kubeadm.go:310] CGROUPS_IO: enabled
	I0916 23:49:04.950779  522590 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 23:49:04.950896  522590 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 23:49:04.950988  522590 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 23:49:04.951039  522590 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 23:49:04.953148  522590 out.go:252]   - Generating certificates and keys ...
	I0916 23:49:04.953253  522590 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 23:49:04.953350  522590 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 23:49:04.953473  522590 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 23:49:04.953544  522590 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 23:49:04.953598  522590 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 23:49:04.953656  522590 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 23:49:04.953723  522590 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 23:49:04.953871  522590 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-069011 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:49:04.953944  522590 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 23:49:04.954104  522590 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-069011 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:49:04.954204  522590 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 23:49:04.954308  522590 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 23:49:04.954373  522590 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 23:49:04.954472  522590 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 23:49:04.954529  522590 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 23:49:04.954641  522590 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 23:49:04.954719  522590 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 23:49:04.954827  522590 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 23:49:04.954889  522590 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 23:49:04.954961  522590 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 23:49:04.955029  522590 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 23:49:04.956667  522590 out.go:252]   - Booting up control plane ...
	I0916 23:49:04.956807  522590 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 23:49:04.956925  522590 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 23:49:04.956985  522590 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 23:49:04.957219  522590 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 23:49:04.957368  522590 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0916 23:49:04.957516  522590 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0916 23:49:04.957633  522590 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 23:49:04.957703  522590 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 23:49:04.957908  522590 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 23:49:04.958044  522590 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 23:49:04.958151  522590 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.203651ms
	I0916 23:49:04.958278  522590 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0916 23:49:04.958374  522590 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0916 23:49:04.958531  522590 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0916 23:49:04.958637  522590 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0916 23:49:04.958758  522590 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.870805967s
	I0916 23:49:04.958876  522590 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.059203573s
	I0916 23:49:04.958980  522590 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.002212231s
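Note: the [control-plane-check] lines poll each component's health endpoint (apiserver /livez on 8443, controller-manager /healthz on 10257, scheduler /livez on 10259) until it returns 200 OK. Certificates are self-signed at this stage, so verification has to be skipped. An illustrative Go sketch of that polling, not kubeadm's real waiter code:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthy polls a health endpoint until it answers 200 OK or the
    // timeout expires, mirroring the up-to-4m0s checks in the log.
    func waitHealthy(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if resp, err := client.Get(url); err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
    	for _, u := range []string{
    		"https://192.168.49.2:8443/livez", // kube-apiserver
    		"https://127.0.0.1:10257/healthz", // kube-controller-manager
    		"https://127.0.0.1:10259/livez",   // kube-scheduler
    	} {
    		fmt.Println(u, waitHealthy(u, 4*time.Minute))
    	}
    }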
	I0916 23:49:04.959143  522590 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 23:49:04.959322  522590 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 23:49:04.959464  522590 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 23:49:04.959729  522590 kubeadm.go:310] [mark-control-plane] Marking the node addons-069011 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 23:49:04.959828  522590 kubeadm.go:310] [bootstrap-token] Using token: hth27u.vwd374r3m591cy8w
	I0916 23:49:04.961508  522590 out.go:252]   - Configuring RBAC rules ...
	I0916 23:49:04.961663  522590 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 23:49:04.961761  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 23:49:04.961918  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 23:49:04.962103  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 23:49:04.962249  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 23:49:04.962324  522590 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 23:49:04.962449  522590 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 23:49:04.962510  522590 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 23:49:04.962584  522590 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 23:49:04.962595  522590 kubeadm.go:310] 
	I0916 23:49:04.962677  522590 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 23:49:04.962687  522590 kubeadm.go:310] 
	I0916 23:49:04.962800  522590 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 23:49:04.962816  522590 kubeadm.go:310] 
	I0916 23:49:04.962858  522590 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 23:49:04.962957  522590 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 23:49:04.963031  522590 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 23:49:04.963041  522590 kubeadm.go:310] 
	I0916 23:49:04.963139  522590 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 23:49:04.963150  522590 kubeadm.go:310] 
	I0916 23:49:04.963217  522590 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 23:49:04.963226  522590 kubeadm.go:310] 
	I0916 23:49:04.963305  522590 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 23:49:04.963432  522590 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 23:49:04.963527  522590 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 23:49:04.963541  522590 kubeadm.go:310] 
	I0916 23:49:04.963668  522590 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 23:49:04.963778  522590 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 23:49:04.963792  522590 kubeadm.go:310] 
	I0916 23:49:04.963908  522590 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hth27u.vwd374r3m591cy8w \
	I0916 23:49:04.964060  522590 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 \
	I0916 23:49:04.964108  522590 kubeadm.go:310] 	--control-plane 
	I0916 23:49:04.964118  522590 kubeadm.go:310] 
	I0916 23:49:04.964224  522590 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 23:49:04.964234  522590 kubeadm.go:310] 
	I0916 23:49:04.964354  522590 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hth27u.vwd374r3m591cy8w \
	I0916 23:49:04.964531  522590 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 
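Note: the --discovery-token-ca-cert-hash printed in the join command is a SHA-256 digest over the cluster CA's DER-encoded Subject Public Key Info, which a joining node can recompute to pin the CA. A small Go sketch of that computation, assuming kubeadm's usual CA path (here under /var/lib/minikube/certs in minikube, /etc/kubernetes/pki elsewhere):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// DER-encode the SubjectPublicKeyInfo and hash it; kubeadm prints the
    	// same value in "sha256:<hex>" form.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }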
	I0916 23:49:04.964546  522590 cni.go:84] Creating CNI manager for ""
	I0916 23:49:04.964565  522590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 23:49:04.966440  522590 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0916 23:49:04.968135  522590 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 23:49:04.972876  522590 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0916 23:49:04.972901  522590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 23:49:04.992864  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 23:49:05.238639  522590 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 23:49:05.238825  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:05.238851  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-069011 minikube.k8s.io/updated_at=2025_09_16T23_49_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=addons-069011 minikube.k8s.io/primary=true
	I0916 23:49:05.248222  522590 ops.go:34] apiserver oom_adj: -16
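Note: the "apiserver oom_adj: -16" line records the kernel OOM-killer bias read from /proc/$(pgrep kube-apiserver)/oom_adj; a negative value makes the kernel less likely to kill the apiserver under memory pressure. A trivial Go reader for the same legacy proc file (the PID here is hypothetical; the log resolves it via pgrep):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // oomAdj returns the contents of the legacy /proc/<pid>/oom_adj knob.
    func oomAdj(pid int) (string, error) {
    	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(b)), nil
    }

    func main() {
    	v, err := oomAdj(1) // hypothetical PID for illustration
    	fmt.Println(v, err)
    }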
	I0916 23:49:05.324340  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:05.825316  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:06.324537  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:06.824724  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:07.325050  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:07.824729  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:08.325083  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:08.824525  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:09.324551  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:09.825331  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:09.895926  522590 kubeadm.go:1105] duration metric: took 4.65716259s to wait for elevateKubeSystemPrivileges
	I0916 23:49:09.895964  522590 kubeadm.go:394] duration metric: took 14.891511977s to StartCluster
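Note: the run of "kubectl get sa default" commands above, spaced roughly 500ms apart, is a poll: minikube keeps asking until the controller-manager has created the default service account, since the minikube-rbac clusterrolebinding created at 23:49:05 targets kube-system:default. A generic sketch of that wait loop (assumed helper, not the real elevateKubeSystemPrivileges code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds,
    // matching the ~500ms cadence visible in the log above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig="+kubeconfig)
    		if cmd.Run() == nil {
    			return nil // service account exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not created within %s", timeout)
    }

    func main() {
    	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.34.0/kubectl",
    		"/var/lib/minikube/kubeconfig", 2*time.Minute)
    	fmt.Println(err)
    }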
	I0916 23:49:09.895989  522590 settings.go:142] acquiring lock: {Name:mk3b4e5824fb8718eece00dc70a9d05f0af2a028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:49:09.896108  522590 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0916 23:49:09.896612  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:49:09.896807  522590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 23:49:09.896820  522590 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 23:49:09.896883  522590 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 23:49:09.897046  522590 config.go:182] Loaded profile config "addons-069011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:49:09.897061  522590 addons.go:69] Setting volcano=true in profile "addons-069011"
	I0916 23:49:09.897068  522590 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-069011"
	I0916 23:49:09.897082  522590 addons.go:238] Setting addon volcano=true in "addons-069011"
	I0916 23:49:09.897052  522590 addons.go:69] Setting yakd=true in profile "addons-069011"
	I0916 23:49:09.897090  522590 addons.go:69] Setting registry-creds=true in profile "addons-069011"
	I0916 23:49:09.897102  522590 addons.go:238] Setting addon yakd=true in "addons-069011"
	I0916 23:49:09.897112  522590 addons.go:238] Setting addon registry-creds=true in "addons-069011"
	I0916 23:49:09.897122  522590 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-069011"
	I0916 23:49:09.897128  522590 addons.go:69] Setting storage-provisioner=true in profile "addons-069011"
	I0916 23:49:09.897146  522590 addons.go:69] Setting volumesnapshots=true in profile "addons-069011"
	I0916 23:49:09.897161  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897169  522590 addons.go:69] Setting metrics-server=true in profile "addons-069011"
	I0916 23:49:09.897176  522590 addons.go:69] Setting cloud-spanner=true in profile "addons-069011"
	I0916 23:49:09.897178  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897047  522590 addons.go:69] Setting inspektor-gadget=true in profile "addons-069011"
	I0916 23:49:09.897165  522590 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-069011"
	I0916 23:49:09.897206  522590 addons.go:238] Setting addon cloud-spanner=true in "addons-069011"
	I0916 23:49:09.897216  522590 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-069011"
	I0916 23:49:09.897232  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897233  522590 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-069011"
	I0916 23:49:09.897264  522590 addons.go:238] Setting addon inspektor-gadget=true in "addons-069011"
	I0916 23:49:09.897181  522590 addons.go:238] Setting addon metrics-server=true in "addons-069011"
	I0916 23:49:09.897423  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897445  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897164  522590 addons.go:238] Setting addon volumesnapshots=true in "addons-069011"
	I0916 23:49:09.897586  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897092  522590 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-069011"
	I0916 23:49:09.897619  522590 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-069011"
	I0916 23:49:09.897820  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897823  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897828  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897883  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897925  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897931  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.898010  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897153  522590 addons.go:238] Setting addon storage-provisioner=true in "addons-069011"
	I0916 23:49:09.898348  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897270  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897123  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.898989  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.899031  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897162  522590 addons.go:69] Setting registry=true in profile "addons-069011"
	I0916 23:49:09.899114  522590 addons.go:238] Setting addon registry=true in "addons-069011"
	I0916 23:49:09.899147  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897135  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897171  522590 addons.go:69] Setting default-storageclass=true in profile "addons-069011"
	I0916 23:49:09.899508  522590 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-069011"
	I0916 23:49:09.897278  522590 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-069011"
	I0916 23:49:09.899697  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897286  522590 addons.go:69] Setting ingress=true in profile "addons-069011"
	I0916 23:49:09.899882  522590 addons.go:238] Setting addon ingress=true in "addons-069011"
	I0916 23:49:09.899918  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897295  522590 addons.go:69] Setting gcp-auth=true in profile "addons-069011"
	I0916 23:49:09.899976  522590 mustload.go:65] Loading cluster: addons-069011
	I0916 23:49:09.897305  522590 addons.go:69] Setting ingress-dns=true in profile "addons-069011"
	I0916 23:49:09.900142  522590 addons.go:238] Setting addon ingress-dns=true in "addons-069011"
	I0916 23:49:09.900176  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.900346  522590 out.go:179] * Verifying Kubernetes components...
	I0916 23:49:09.902141  522590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:49:09.906029  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906489  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906586  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906921  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.907068  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.909270  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.909876  522590 config.go:182] Loaded profile config "addons-069011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:49:09.910613  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906032  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.966036  522590 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-069011"
	I0916 23:49:09.966110  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.966784  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	W0916 23:49:09.981981  522590 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0916 23:49:09.986930  522590 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0916 23:49:09.989771  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 23:49:09.989801  522590 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 23:49:09.989878  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:09.990151  522590 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0916 23:49:09.991871  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 23:49:09.992484  522590 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0916 23:49:09.993934  522590 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 23:49:09.993954  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 23:49:09.994025  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:09.994418  522590 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0916 23:49:09.994431  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0916 23:49:09.994485  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:09.997452  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 23:49:09.997452  522590 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0916 23:49:10.001152  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 23:49:10.001192  522590 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0916 23:49:10.001229  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0916 23:49:10.001311  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.003359  522590 addons.go:238] Setting addon default-storageclass=true in "addons-069011"
	I0916 23:49:10.003429  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:10.003879  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:10.004609  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 23:49:10.006166  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 23:49:10.007322  522590 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0916 23:49:10.008643  522590 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 23:49:10.008663  522590 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0916 23:49:10.008684  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 23:49:10.008820  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 23:49:10.008829  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.010190  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 23:49:10.010220  522590 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 23:49:10.010294  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.012486  522590 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 23:49:10.012564  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 23:49:10.014826  522590 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:49:10.014910  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 23:49:10.015167  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.016771  522590 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0916 23:49:10.018372  522590 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 23:49:10.018418  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0916 23:49:10.018493  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 23:49:10.018494  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.019739  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 23:49:10.019764  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 23:49:10.019840  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.023104  522590 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0916 23:49:10.023240  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0916 23:49:10.024340  522590 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 23:49:10.024365  522590 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0916 23:49:10.024441  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.025784  522590 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0916 23:49:10.025900  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:49:10.027422  522590 out.go:179]   - Using image docker.io/registry:3.0.0
	I0916 23:49:10.029503  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:10.032360  522590 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 23:49:10.032382  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 23:49:10.032451  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.032643  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:49:10.037094  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
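Note: the repeated `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` commands resolve which host port Docker mapped to the node container's SSH port 22; the sshutil lines then dial 127.0.0.1 on that port (33133 here). An equivalent lookup from Go, shelling out to the same docker CLI:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshHostPort asks Docker which host port is bound to the container's
    // 22/tcp, using the same Go template as the log lines above.
    func sshHostPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("addons-069011")
    	fmt.Println(port, err)
    }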
	I0916 23:49:10.038113  522590 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 23:49:10.038152  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 23:49:10.038221  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.058927  522590 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 23:49:10.058950  522590 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 23:49:10.059009  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.063705  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 23:49:10.066747  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 23:49:10.066781  522590 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 23:49:10.066937  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.067231  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.069660  522590 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 23:49:10.072852  522590 out.go:179]   - Using image docker.io/busybox:stable
	I0916 23:49:10.077706  522590 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 23:49:10.077738  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 23:49:10.077812  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.081171  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.099594  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.099601  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.101679  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.103303  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.109277  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.113014  522590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 23:49:10.114406  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.114692  522590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:49:10.116962  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.132677  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.135654  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.137795  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.144377  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.149192  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.245816  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 23:49:10.245838  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 23:49:10.253803  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0916 23:49:10.256108  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 23:49:10.265944  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 23:49:10.288794  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 23:49:10.288827  522590 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 23:49:10.291276  522590 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:10.291301  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0916 23:49:10.298027  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:49:10.301761  522590 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 23:49:10.301815  522590 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 23:49:10.303881  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 23:49:10.303906  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 23:49:10.307619  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0916 23:49:10.321011  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 23:49:10.321513  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 23:49:10.321533  522590 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 23:49:10.335228  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 23:49:10.342628  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 23:49:10.353105  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 23:49:10.360830  522590 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 23:49:10.360864  522590 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 23:49:10.366097  522590 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 23:49:10.366124  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 23:49:10.368966  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 23:49:10.368997  522590 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 23:49:10.374870  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 23:49:10.374897  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 23:49:10.383228  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:10.419473  522590 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 23:49:10.419505  522590 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 23:49:10.420148  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 23:49:10.420173  522590 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 23:49:10.431466  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 23:49:10.431495  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 23:49:10.431508  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 23:49:10.447520  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 23:49:10.491601  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 23:49:10.491635  522590 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 23:49:10.495666  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 23:49:10.495699  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 23:49:10.522266  522590 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 23:49:10.522304  522590 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 23:49:10.608119  522590 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
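Note: the sed pipeline run at 23:49:10.113 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.49.1), by inserting a hosts{} stanza just before the forward plugin in the Corefile (it also inserts `log` before `errors`). A sketch of the hosts{} half of that edit as a plain string transformation in Go; the real flow round-trips the ConfigMap through kubectl get/replace:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a hosts{} block with a fallthrough directive
    // immediately before the Corefile's forward plugin.
    func injectHostRecord(corefile, hostIP string) string {
    	stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
    	var out strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			out.WriteString(stanza)
    		}
    		out.WriteString(line)
    	}
    	return out.String()
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
    }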
	I0916 23:49:10.610081  522590 node_ready.go:35] waiting up to 6m0s for node "addons-069011" to be "Ready" ...
	I0916 23:49:10.613978  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 23:49:10.614095  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 23:49:10.619888  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 23:49:10.619918  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 23:49:10.636272  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 23:49:10.636303  522590 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 23:49:10.689230  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 23:49:10.705272  522590 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 23:49:10.705297  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 23:49:10.708368  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 23:49:10.708557  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 23:49:10.788275  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 23:49:10.788306  522590 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 23:49:10.806501  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 23:49:10.869607  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 23:49:10.869632  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 23:49:10.937889  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 23:49:10.937914  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 23:49:11.002071  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 23:49:11.002102  522590 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 23:49:11.047895  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 23:49:11.130142  522590 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-069011" context rescaled to 1 replicas
	I0916 23:49:11.643350  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.290178117s)
	I0916 23:49:11.643439  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.30078278s)
	I0916 23:49:11.643452  522590 addons.go:479] Verifying addon ingress=true in "addons-069011"
	I0916 23:49:11.643582  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.212051777s)
	I0916 23:49:11.643613  522590 addons.go:479] Verifying addon registry=true in "addons-069011"
	I0916 23:49:11.643522  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.260251451s)
	I0916 23:49:11.643722  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.196160875s)
	W0916 23:49:11.643735  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:11.643740  522590 addons.go:479] Verifying addon metrics-server=true in "addons-069011"
	I0916 23:49:11.643761  522590 retry.go:31] will retry after 298.602868ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:11.646501  522590 out.go:179] * Verifying registry addon...
	I0916 23:49:11.646501  522590 out.go:179] * Verifying ingress addon...
	I0916 23:49:11.646504  522590 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-069011 service yakd-dashboard -n yakd-dashboard
	
	I0916 23:49:11.652191  522590 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 23:49:11.652206  522590 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 23:49:11.655147  522590 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 23:49:11.655173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:11.655271  522590 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 23:49:11.655299  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:11.943533  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:12.143203  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.336408881s)
	W0916 23:49:12.143280  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 23:49:12.143297  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.095362374s)
	I0916 23:49:12.143318  522590 retry.go:31] will retry after 271.042655ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
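
Note: the failure above is a CRD registration race, not a broken manifest. The VolumeSnapshotClass object is applied in the same kubectl invocation as the CRDs that define its type, and the API server has not yet begun serving snapshot.storage.k8s.io/v1 when the REST mapping is resolved, hence "ensure CRDs are installed first" and the retry, which later succeeds. A minimal client-go sketch of one way to sidestep the race, polling discovery until the CRD-backed group/version is served before applying the class (the helper name and timeouts here are illustrative, not minikube's code):

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForSnapshotAPI polls the discovery endpoint until the API server
// serves VolumeSnapshotClass under snapshot.storage.k8s.io/v1.
func waitForSnapshotAPI(ctx context.Context, kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return err
	}
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 30*time.Second, true,
		func(ctx context.Context) (bool, error) {
			res, err := dc.ServerResourcesForGroupVersion("snapshot.storage.k8s.io/v1")
			if err != nil {
				return false, nil // group/version not served yet; keep polling
			}
			for _, r := range res.APIResources {
				if r.Kind == "VolumeSnapshotClass" {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	if err := waitForSnapshotAPI(context.Background(), "/var/lib/minikube/kubeconfig"); err != nil {
		fmt.Println("snapshot API never appeared:", err) // applying the class would still fail here
	}
}
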
	I0916 23:49:12.143322  522590 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-069011"
	I0916 23:49:12.145833  522590 out.go:179] * Verifying csi-hostpath-driver addon...
	I0916 23:49:12.148236  522590 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 23:49:12.153014  522590 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 23:49:12.153041  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:12.157053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:12.157321  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
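
Note: the kapi.go:96 lines that repeat for the rest of this log are a poll loop over pods matched by a label selector; on every iteration the registry, ingress-nginx, and csi-hostpath-driver pods are still Pending. A rough client-go sketch of such a loop, under the assumption that it lists pods by the selector shown in the log (the function name and intervals are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPod polls until a pod matching selector in ns is Running.
func waitForLabeledPod(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient error or no pods yet
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil // all matches still Pending, as in the log
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitForLabeledPod(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry")
	fmt.Println("registry wait result:", err)
}
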
	I0916 23:49:12.415287  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0916 23:49:12.575627  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:12.575662  522590 retry.go:31] will retry after 298.950278ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
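
Note: unlike the snapshot race above, the ig-crd.yaml failure cannot resolve by retrying. kubectl's client-side validation rejects the file because a document in it carries no apiVersion or kind, i.e. no TypeMeta, so the identical error repeats on every attempt below. A minimal sketch of the two fields the validator is asking for, rendered from the Go types (the CRD name is made up for illustration; the actual contents of ig-crd.yaml are not shown in this log):

package main

import (
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	crd := apiextv1.CustomResourceDefinition{
		// TypeMeta is exactly what "apiVersion not set, kind not set" refers to;
		// a YAML document without it cannot be validated or mapped to a type.
		TypeMeta: metav1.TypeMeta{
			APIVersion: "apiextensions.k8s.io/v1",
			Kind:       "CustomResourceDefinition",
		},
		ObjectMeta: metav1.ObjectMeta{Name: "examples.gadget.example.io"}, // illustrative name
	}
	out, err := yaml.Marshal(crd)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // emits a document beginning with apiVersion: and kind:
}
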
	W0916 23:49:12.614105  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:12.652906  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:12.655120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:12.655721  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:12.875699  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:13.152262  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:13.155946  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:13.156155  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:13.653200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:13.655268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:13.655558  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:14.152741  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:14.154674  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:14.154869  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:14.651414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:14.654802  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:14.654981  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:14.929904  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.51454475s)
	I0916 23:49:14.929925  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.05417803s)
	W0916 23:49:14.929968  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:14.929993  522590 retry.go:31] will retry after 724.402782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:15.113335  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
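
Note: interleaved with the pod waits, node_ready.go keeps reporting that node "addons-069011" has Ready=False; nothing scheduled on the node can leave Pending until the kubelet posts a Ready condition, which is why all three addon waiters stall together. A sketch of that condition check (the helper is illustrative; minikube's own node_ready.go internals are not shown in this log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // no Ready condition posted yet
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(context.Background(), kubernetes.NewForConfigOrDie(cfg), "addons-069011")
	fmt.Println(ready, err) // the log shows false throughout this window
}
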
	I0916 23:49:15.152058  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:15.155353  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:15.155409  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:15.651139  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:15.655103  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:15.655174  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:15.655439  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:16.152053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:16.155268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:16.155481  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:16.233482  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:16.233517  522590 retry.go:31] will retry after 528.645422ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:16.652337  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:16.654976  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:16.655052  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:16.763126  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:17.152861  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:17.155200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:17.155374  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:17.346237  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:17.346292  522590 retry.go:31] will retry after 1.241721728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:17.613291  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:17.637138  522590 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 23:49:17.637240  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:17.651912  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:17.655594  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:17.655874  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:17.659459  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:17.770859  522590 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 23:49:17.790444  522590 addons.go:238] Setting addon gcp-auth=true in "addons-069011"
	I0916 23:49:17.790517  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:17.790880  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:17.810255  522590 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 23:49:17.810334  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:17.829504  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
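
Note: the cli_runner/sshutil pairs above show how commands reach the node with the docker driver: minikube asks Docker for the host port mapped to the container's 22/tcp and opens an SSH client against 127.0.0.1 on that port (33133 here). The same lookup, shelled out from Go with the inspect template taken verbatim from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort returns the host port Docker mapped to the container's 22/tcp.
func hostSSHPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("addons-069011")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh -p", port, "docker@127.0.0.1") // the log shows port 33133
}
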
	I0916 23:49:17.924366  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:49:17.925772  522590 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0916 23:49:17.926989  522590 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 23:49:17.927016  522590 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 23:49:17.947928  522590 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 23:49:17.947963  522590 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 23:49:17.968887  522590 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 23:49:17.968910  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 23:49:17.988471  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 23:49:18.151889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:18.155501  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:18.155799  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:18.360333  522590 addons.go:479] Verifying addon gcp-auth=true in "addons-069011"
	I0916 23:49:18.361695  522590 out.go:179] * Verifying gcp-auth addon...
	I0916 23:49:18.364169  522590 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 23:49:18.367024  522590 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 23:49:18.367044  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:18.588324  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:18.652355  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:18.654775  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:18.655329  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:18.867741  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:19.151755  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:19.154903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:19.154930  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:19.161345  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:19.161383  522590 retry.go:31] will retry after 2.165570319s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:19.367774  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:19.614026  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:19.652152  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:19.655765  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:19.655827  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:19.867758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:20.151387  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:20.154666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:20.154897  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:20.368600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:20.651411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:20.655000  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:20.655011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:20.868027  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:21.151730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:21.155244  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:21.155464  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:21.327698  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:21.367411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:21.650905  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:21.655659  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:21.655769  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:21.867968  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:21.902069  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:21.902100  522590 retry.go:31] will retry after 1.920767743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:22.113269  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:22.152312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:22.154840  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:22.154952  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:22.368638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:22.651563  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:22.654897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:22.655020  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:22.868412  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:23.151599  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:23.155033  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:23.155245  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:23.367616  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:23.651422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:23.654714  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:23.654854  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:23.823078  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:23.867734  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:24.113772  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:24.152012  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:24.155306  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:24.155536  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:24.367843  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:24.396574  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:24.396608  522590 retry.go:31] will retry after 5.249600328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
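
Note: by this point the announced retry delays for the ig-crd.yaml apply have grown from 298ms through 724ms, 1.2s, and 2.2s to 5.2s (and, further below, 6.7s and 9.2s), i.e. retry.go is backing off roughly exponentially with jitter. A minimal sketch of that pattern, assuming jittered doubling; this is not minikube's actual retry.go, whose internals the log doesn't show:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs op up to attempts times, roughly doubling a jittered
// delay between failures, mirroring the "will retry after ..." lines above.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := op(); err == nil {
			return nil
		}
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay))) // jitter
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return errors.New("all retries failed")
}

func main() {
	_ = retryWithBackoff(6, 250*time.Millisecond, func() error {
		return errors.New("apply failed") // stands in for the kubectl apply in the log
	})
}
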
	I0916 23:49:24.651892  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:24.655386  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:24.655528  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:24.868048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:25.152228  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:25.154971  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:25.155056  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:25.368598  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:25.651661  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:25.655231  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:25.655269  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:25.867507  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:26.151287  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:26.155745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:26.155923  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:26.368083  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:26.612894  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:26.652086  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:26.655386  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:26.655500  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:26.867894  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:27.151727  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:27.155040  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:27.155077  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:27.368077  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:27.652080  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:27.655544  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:27.655685  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:27.868071  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:28.151972  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:28.155039  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:28.155194  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:28.367271  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:28.613247  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:28.652605  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:28.654553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:28.654734  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:28.868444  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:29.151120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:29.155325  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:29.155404  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:29.367903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:29.646635  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:29.651947  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:29.655369  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:29.655591  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:29.868090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:30.151994  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:30.155445  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:30.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:30.222879  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:30.222909  522590 retry.go:31] will retry after 6.679975361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:30.368039  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:30.651921  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:30.655141  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:30.655354  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:30.867036  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:31.112894  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:31.151818  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:31.155258  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:31.155291  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:31.367578  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:31.651196  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:31.655723  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:31.655764  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:31.867818  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:32.152173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:32.155965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:32.156115  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:32.367078  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:32.652733  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:32.655287  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:32.655347  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:32.867604  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:33.113866  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:33.151850  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:33.155462  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:33.155490  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:33.367548  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:33.651173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:33.655487  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:33.655550  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:33.867796  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:34.151692  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:34.154752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:34.154822  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:34.367980  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:34.652127  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:34.655730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:34.655791  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:34.868271  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:35.151839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:35.155765  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:35.155925  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:35.368376  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:35.613366  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:35.651791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:35.655929  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:35.656002  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:35.868276  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:36.152007  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:36.155246  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:36.155379  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:36.367593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:36.652140  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:36.655627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:36.655826  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:36.867579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:36.903759  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:37.152322  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:37.155245  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:37.155410  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:37.367621  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:37.484516  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:37.484552  522590 retry.go:31] will retry after 4.853736845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:37.613755  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:37.651588  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:37.654987  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:37.655126  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:37.867377  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:38.151407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:38.154847  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:38.155074  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:38.368215  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:38.651724  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:38.655025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:38.655174  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:38.867641  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:39.151291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:39.155533  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:39.155660  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:39.368023  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:39.613957  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:39.652056  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:39.655324  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:39.655427  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:39.867688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:40.151889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:40.155213  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:40.155515  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:40.367629  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:40.652268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:40.655504  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:40.655716  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:40.867786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:41.151908  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:41.155026  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:41.155219  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:41.367009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:41.652274  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:41.654845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:41.654993  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:41.868497  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:42.113784  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:42.152011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:42.156178  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:42.156253  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:42.339312  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:42.368085  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:42.653863  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:42.656534  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:42.656609  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:42.867016  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:42.931965  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:42.932013  522590 retry.go:31] will retry after 9.201032876s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
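The apply failure above will recur on every retry: kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest declares neither apiVersion nor kind, so the growing backoff (9.2s here, 11.2s and 23.8s later in this log) cannot help until the file itself is fixed. As a minimal sketch, and not part of minikube, a pre-flight check like the following would surface the same two missing keys before shelling out to kubectl; only the file path is taken from the log, the rest is illustrative:

	// manifestcheck: verify every YAML document in a manifest declares
	// apiVersion and kind -- the exact validation kubectl fails on above.
	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for i := 1; ; i++ {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				fmt.Fprintf(os.Stderr, "document %d: %v\n", i, err)
				os.Exit(1)
			}
			// The log's error means these two keys are absent.
			if doc["apiVersion"] == nil || doc["kind"] == nil {
				fmt.Fprintf(os.Stderr, "document %d: apiVersion/kind not set\n", i)
				os.Exit(1)
			}
		}
		fmt.Println("all documents declare apiVersion and kind")
	}

Run against the ig-crd.yaml from this run, a check like this would report the same defect kubectl does, but before the daemonset half of the apply has already been mutated.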
	I0916 23:49:43.151738  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:43.155452  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:43.157165  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:43.367931  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:43.651921  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:43.655792  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:43.655791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:43.868283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:44.151192  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:44.155952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:44.156077  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:44.368187  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:44.612897  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:44.651871  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:44.655165  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:44.655374  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:44.867416  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:45.152200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:45.155365  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:45.155527  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:45.367088  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:45.652905  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:45.655224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:45.655382  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:45.867470  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:46.152562  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:46.155553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:46.155698  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:46.367899  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:46.613967  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:46.652183  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:46.655613  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:46.655685  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:46.867721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:47.151749  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:47.155062  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:47.155242  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:47.367292  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:47.652156  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:47.655812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:47.656147  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:47.867423  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:48.152152  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:48.155526  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:48.155678  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:48.367871  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:48.651966  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:48.655104  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:48.655456  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:48.867380  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:49.113864  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:49.151422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:49.154601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:49.154659  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:49.368059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:49.651895  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:49.655081  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:49.655227  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:49.867193  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:50.151407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:50.154433  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:50.154532  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:50.367752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:50.614048  522590 node_ready.go:49] node "addons-069011" is "Ready"
	I0916 23:49:50.614124  522590 node_ready.go:38] duration metric: took 40.004018622s for node "addons-069011" to be "Ready" ...
	I0916 23:49:50.614142  522590 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:49:50.614260  522590 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:49:50.634002  522590 api_server.go:72] duration metric: took 40.737149121s to wait for apiserver process to appear ...
	I0916 23:49:50.634037  522590 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:49:50.634066  522590 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:49:50.639530  522590 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:49:50.640709  522590 api_server.go:141] control plane version: v1.34.0
	I0916 23:49:50.640743  522590 api_server.go:131] duration metric: took 6.69752ms to wait for apiserver health ...
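The healthz lines above are the apiserver health gate: minikube polls https://192.168.49.2:8443/healthz until it answers 200 with body "ok", then reads the control plane version. A rough sketch of that poll with plain net/http follows; the real client authenticates with the cluster's certificates, so the InsecureSkipVerify shortcut below is an assumption made only to keep the example self-contained:

	// healthzpoll: illustrative apiserver health probe, not minikube's client.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: skip server-cert verification instead of loading
			// the kubeconfig CA bundle as the real code would.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.49.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					// Matches the `returned 200: ok` lines in the log.
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}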
	I0916 23:49:50.640754  522590 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:49:50.645035  522590 system_pods.go:59] 20 kube-system pods found
	I0916 23:49:50.645109  522590 system_pods.go:61] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:50.645119  522590 system_pods.go:61] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:50.645126  522590 system_pods.go:61] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending
	I0916 23:49:50.645131  522590 system_pods.go:61] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending
	I0916 23:49:50.645134  522590 system_pods.go:61] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending
	I0916 23:49:50.645138  522590 system_pods.go:61] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:50.645141  522590 system_pods.go:61] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:50.645146  522590 system_pods.go:61] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:50.645150  522590 system_pods.go:61] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:50.645156  522590 system_pods.go:61] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:50.645165  522590 system_pods.go:61] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:50.645171  522590 system_pods.go:61] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:50.645182  522590 system_pods.go:61] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:50.645192  522590 system_pods.go:61] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:50.645206  522590 system_pods.go:61] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:50.645211  522590 system_pods.go:61] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending
	I0916 23:49:50.645217  522590 system_pods.go:61] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:50.645222  522590 system_pods.go:61] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.645231  522590 system_pods.go:61] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.645238  522590 system_pods.go:61] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:50.645253  522590 system_pods.go:74] duration metric: took 4.491675ms to wait for pod list to return data ...
	I0916 23:49:50.645267  522590 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:49:50.649832  522590 default_sa.go:45] found service account: "default"
	I0916 23:49:50.649863  522590 default_sa.go:55] duration metric: took 4.587184ms for default service account to be created ...
	I0916 23:49:50.649876  522590 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:49:50.651240  522590 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 23:49:50.651263  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
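Each kapi.go:96 line is one iteration of a per-selector pod poll: list the pods matching a label selector and keep waiting while any of them is still Pending. A sketch of a single iteration with client-go, using the csi-hostpath-driver selector from the log; building the client from a local kubeconfig is a simplifying assumption (minikube constructs it from the cluster's own config):

	// podpoll: one iteration of the label-selector wait seen in the log.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: ~/.kube/config points at the addons-069011 cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=csi-hostpath-driver"})
		if err != nil {
			panic(err)
		}
		fmt.Printf("Found %d Pods for label selector\n", len(pods.Items))
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				// Corresponds to a "waiting for pod ... current state" line.
				fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
			}
		}
	}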
	I0916 23:49:50.653416  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:50.653453  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:50.653463  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:50.653471  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending
	I0916 23:49:50.653478  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending
	I0916 23:49:50.653507  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending
	I0916 23:49:50.653517  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:50.653523  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:50.653531  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:50.653541  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:50.653553  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:50.653564  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:50.653570  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:50.653577  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:50.653586  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:50.653604  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:50.653610  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending
	I0916 23:49:50.653621  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:50.653630  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.653641  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.653649  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:50.653671  522590 retry.go:31] will retry after 286.454663ms: missing components: kube-dns
	I0916 23:49:50.654669  522590 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 23:49:50.654689  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:50.655263  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:50.867812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:50.970963  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:50.971008  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:50.971021  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:50.971032  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 23:49:50.971040  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 23:49:50.971049  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 23:49:50.971060  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:50.971067  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:50.971075  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:50.971081  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:50.971093  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:50.971098  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:50.971107  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:50.971115  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:50.971127  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:50.971139  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:50.971149  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0916 23:49:50.971487  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:50.971519  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.971529  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.971537  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:50.971560  522590 retry.go:31] will retry after 250.710433ms: missing components: kube-dns
	I0916 23:49:51.152661  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:51.154830  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:51.154922  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:51.227146  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:51.227184  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:51.227191  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:51.227200  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 23:49:51.227206  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 23:49:51.227213  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 23:49:51.227219  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:51.227223  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:51.227226  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:51.227230  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:51.227235  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:51.227241  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:51.227244  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:51.227250  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:51.227256  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:51.227261  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:51.227265  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0916 23:49:51.227272  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:51.227277  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.227286  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.227292  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:51.227310  522590 retry.go:31] will retry after 293.334556ms: missing components: kube-dns
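The sub-second waits above ("will retry after 286ms / 250ms / 293ms: missing components: kube-dns") come from a jittered retry loop around the k8s-apps check, which exits as soon as coredns reports Running, as it does a few lines below. A minimal sketch of that pattern; the probe function and the jitter bounds are invented for the example and are not minikube's actual retry.go:

	// retrysketch: the jittered retry loop suggested by the log lines above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// checkKubeDNS stands in for the real "missing components" probe.
	func checkKubeDNS(attempt int) error {
		if attempt < 3 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	}

	func main() {
		for attempt := 1; ; attempt++ {
			if err := checkKubeDNS(attempt); err != nil {
				// Roughly 250-300ms with jitter, like the waits in the log.
				wait := 250*time.Millisecond + time.Duration(rand.Intn(50))*time.Millisecond
				fmt.Printf("will retry after %v: %v\n", wait, err)
				time.Sleep(wait)
				continue
			}
			fmt.Println("k8s-apps are running")
			return
		}
	}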
	I0916 23:49:51.368304  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:51.526481  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:51.526535  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:51.526545  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Running
	I0916 23:49:51.526559  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 23:49:51.526572  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 23:49:51.526582  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 23:49:51.526589  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:51.526595  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:51.526601  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:51.526608  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:51.526618  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:51.526623  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:51.526629  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:51.526635  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:51.526645  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:51.526690  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:51.526699  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0916 23:49:51.526714  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:51.526722  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.526731  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.526737  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Running
	I0916 23:49:51.526755  522590 system_pods.go:126] duration metric: took 876.872082ms to wait for k8s-apps to be running ...
	I0916 23:49:51.526767  522590 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:49:51.526834  522590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:49:51.543571  522590 system_svc.go:56] duration metric: took 16.790922ms WaitForService to wait for kubelet
	I0916 23:49:51.543604  522590 kubeadm.go:578] duration metric: took 41.646760707s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
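The WaitForService step above shells out over SSH and lets systemctl's exit code answer whether kubelet is active. The same probe, run locally for illustration with the exact arguments from the log line rather than through minikube's ssh_runner:

	// kubeletcheck: the "is kubelet running" probe from the log.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// --quiet suppresses output; the exit code alone carries the answer.
		cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}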
	I0916 23:49:51.543633  522590 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:49:51.546804  522590 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:49:51.546832  522590 node_conditions.go:123] node cpu capacity is 8
	I0916 23:49:51.546851  522590 node_conditions.go:105] duration metric: took 3.210939ms to run NodePressure ...
	I0916 23:49:51.546866  522590 start.go:241] waiting for startup goroutines ...
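The NodePressure verification above reads the node's capacity fields (304681132Ki of ephemeral storage, 8 CPUs). A client-go sketch fetching the same two values for the node named in the log, under the same local-kubeconfig assumption as the earlier pod-poll sketch:

	// capacitysketch: read the capacity fields behind the
	// "node storage ephemeral capacity" / "node cpu capacity" lines.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-069011", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
		fmt.Printf("node cpu capacity is %s\n", cpu.String())
	}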
	I0916 23:49:51.653201  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:51.655460  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:51.655502  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:51.867905  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:52.133215  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:52.152421  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:52.155241  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:52.155318  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:52.367901  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:52.651612  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:52.655810  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:52.655874  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:52.780604  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:52.780644  522590 retry.go:31] will retry after 11.236841486s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:52.867960  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:53.152499  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:53.155229  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:53.155690  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:53.369120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:53.653294  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:53.655366  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:53.655499  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:53.867612  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:54.152263  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:54.154786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:54.154825  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:54.368535  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:54.651809  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:54.655532  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:54.655654  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:54.868318  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:55.152216  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:55.154997  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:55.155198  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:55.368885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:55.652607  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:55.654882  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:55.654882  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:55.868072  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:56.153735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:56.155961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:56.156369  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:56.367288  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:56.651552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:56.654554  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:56.654654  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:56.867827  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:57.152232  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:57.154799  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:57.154814  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:57.368344  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:57.651690  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:57.655166  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:57.655327  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:57.867912  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:58.152149  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:58.155593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:58.155720  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:58.367868  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:58.652249  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:58.654626  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:58.654817  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:58.867989  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:59.152281  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:59.154848  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:59.154899  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:59.368414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:59.651849  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:59.655048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:59.655193  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:59.866961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:00.152429  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:00.154913  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:00.154932  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:00.367821  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:00.652008  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:00.655477  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:00.655518  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:00.867460  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:01.152318  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:01.155248  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:01.155323  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:01.367552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:01.651746  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:01.655519  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:01.655601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:01.867766  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:02.152212  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:02.154600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:02.154831  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:02.367336  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:02.651757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:02.655315  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:02.655331  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:02.867665  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:03.152281  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:03.154749  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:03.154818  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:03.368215  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:03.651319  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:03.655739  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:03.655966  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:03.868159  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:04.018435  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:50:04.151970 - 23:50:04.367594  522590 kapi.go:96] [4 poll lines collapsed: all four addon pods still Pending]
	W0916 23:50:04.598781  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:50:04.598815  522590 retry.go:31] will retry after 23.829016694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
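addons.go:461 and retry.go:31 above show the failure-handling shape: the apply exits non-zero, the full stdout and stderr are logged, and the same command is scheduled to run again after a randomized delay (23.8s here). A minimal sketch of that retry pattern, assuming a generic applyFn callback; the attempt cap, base delay, and jitter factor are illustrative values, not minikube's actual tuning:

    // Hypothetical retry-with-jittered-backoff sketch; not minikube's actual retry.go.
    package retrysketch

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff re-runs applyFn until it succeeds or attempts are exhausted,
    // sleeping a jittered, growing delay between tries, the same shape as the
    // "will retry after ..." lines in this log.
    func retryWithBackoff(applyFn func() error, attempts int, base time.Duration) error {
        var err error
        delay := base
        for i := 0; i < attempts; i++ {
            if err = applyFn(); err == nil {
                return nil
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay))) // 1x to 2x delay
            fmt.Printf("apply failed, will retry after %s: %v\n", jittered, err)
            time.Sleep(jittered)
            delay *= 2
        }
        return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
    }

Retrying only helps with transient failures; the validation error above is deterministic, so each retry reproduces it, as the second attempt below confirms.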
	I0916 23:50:04.652029 - 23:50:28.367835  522590 kapi.go:96] [192 near-identical poll lines collapsed: csi-hostpath-driver, registry, ingress-nginx, and gcp-auth pods re-checked every ~500ms; all remain Pending: [<nil>]]
	I0916 23:50:28.428940  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:50:28.651949 - 23:50:28.866833  522590 kapi.go:96] [4 poll lines collapsed: all four addon pods still Pending]
	W0916 23:50:29.128531  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:50:29.128569  522590 retry.go:31] will retry after 40.39789771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
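The stderr is the actionable part: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because at least one YAML document in it sets neither apiVersion nor kind, so these retries cannot succeed until the manifest itself is fixed (or validation is disabled with --validate=false, as the message suggests). As a hedged illustration, a pre-flight check like the following would surface the bad document before shelling out to kubectl; the helper name, the naive "---" document splitting, and the use of sigs.k8s.io/yaml are assumptions of this sketch:

    // Hypothetical pre-flight manifest check; illustration only.
    package preflight

    import (
        "fmt"
        "strings"

        "sigs.k8s.io/yaml"
    )

    // checkManifest verifies that every YAML document in data declares apiVersion
    // and kind, the two fields kubectl's validator reported as missing above.
    func checkManifest(data string) error {
        // Naive document splitting; a real tool would use a streaming YAML decoder.
        for i, doc := range strings.Split(data, "\n---\n") {
            var m struct {
                APIVersion string `json:"apiVersion"`
                Kind       string `json:"kind"`
            }
            if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
                return fmt.Errorf("document %d: %w", i, err)
            }
            if m.APIVersion == "" || m.Kind == "" {
                return fmt.Errorf("document %d: apiVersion or kind not set", i)
            }
        }
        return nil
    }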
	I0916 23:50:29.154066  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:29.156666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:29.156872  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:29.367799  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:29.652238  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:29.654780  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:29.655095  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:29.867922  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:30.152458  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:30.155006  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:30.155093  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:30.367812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:30.652850  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:30.655351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:30.655439  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:30.867340  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:31.151917  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:31.155386  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:31.155417  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:31.367531  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:31.653268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:31.657791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:31.657831  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:31.868270  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:32.155469  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:32.157902  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:32.158614  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:32.368334  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:32.652124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:32.656126  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:32.656171  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:32.867579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:33.152224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:33.155033  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:33.156187  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:33.366965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:33.652338  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:33.655162  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:33.655350  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:33.868673  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:34.152675  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:34.155008  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:34.155063  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:34.368239  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:34.652014  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:34.655025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:34.655185  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:34.867899  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:35.152626  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:35.155359  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:35.155446  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:35.367305  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:35.652378  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:35.655807  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:35.655815  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:35.868004  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:36.152291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:36.155228  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:36.155274  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:36.367904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:36.652666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:36.655054  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:36.655056  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:36.868245  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:37.153660  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:37.155936  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:37.156021  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:37.367947  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:37.652965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:37.654916  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:37.654970  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:37.867352  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:38.152079  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:38.155581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:38.155593  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:38.367781  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:38.652943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:38.655717  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:38.655815  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:38.868640  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:39.152316  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:39.155082  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:39.155138  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:39.368233  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:39.651993  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:39.654885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:39.655026  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:39.868217  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:40.152059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:40.155525  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:40.155590  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:40.367907  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:40.652106  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:40.655499  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:40.655512  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:40.867817  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:41.152251  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:41.154655  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:41.154763  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:41.367545  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:41.652678  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:41.654751  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:41.654768  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:41.868012  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:42.152312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:42.154862  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:42.154889  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:42.368681  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:42.652243  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:42.654497  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:42.654707  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:42.867848  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:43.152560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:43.156124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:43.156157  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:43.367649  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:43.652430  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:43.654968  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:43.654986  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:43.867477  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:44.151715  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:44.154833  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:44.154926  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:44.368003  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:44.652097  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:44.655411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:44.655482  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:44.867734  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:45.151785  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:45.155040  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:45.155294  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:45.367710  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:45.652316  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:45.654798  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:45.654835  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:45.867771  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:46.151940  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:46.155607  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:46.155638  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:46.367470  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:46.652017  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:46.655632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:46.655678  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:46.867796  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:47.152166  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:47.155566  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:47.155778  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:47.367781  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:47.653210  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:47.655490  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:47.655647  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:47.867856  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:48.152084  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:48.155486  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:48.155488  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:48.367425  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:48.651605  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:48.654912  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:48.654974  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:48.868218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:49.151097  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:49.155642  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:49.155716  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:49.367781  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:49.652527  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:49.654528  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:49.654540  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:49.867508  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:50.152341  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:50.155428  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:50.155428  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... ~150 nearly identical kapi.go:96 polling lines elided: the gcp-auth, csi-hostpath-driver, registry, and ingress-nginx selectors all remain "Pending: [<nil>]", each selector polled roughly every 500 ms from 23:50:50 through 23:51:09 ...]
	I0916 23:51:09.368515  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:09.526750  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:51:09.652541  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:09.655572  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:09.655659  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:09.868054  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:51:10.098163  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:51:10.098324  522590 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
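The stderr above is kubectl's client-side schema validation: every document in an applied manifest must set both apiVersion and kind, and one document in ig-crd.yaml evidently has neither. A minimal reproduction sketch, assuming any reachable cluster and an illustrative file path (not taken from this run):

	cat <<'EOF' >/tmp/no-kind.yaml
	metadata:
	  name: example   # document deliberately missing apiVersion and kind
	EOF
	kubectl apply -f /tmp/no-kind.yaml
	# error: error validating "/tmp/no-kind.yaml": error validating data:
	# [apiVersion not set, kind not set]; if you choose to ignore these
	# errors, turn validation off with --validate=false

As the retry warning above shows, minikube's addon manager re-runs the apply rather than passing --validate=false, so the polling below continues while the 'inspektor-gadget' addon stays broken.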
	I0916 23:51:10.152880  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:10.154839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:10.154875  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:10.367834  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... ~290 nearly identical kapi.go:96 polling lines elided: all four selectors still "Pending: [<nil>]", each selector polled roughly every 500 ms from 23:51:10 through 23:51:46 ...]
	I0916 23:51:47.152102  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:47.155410  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:47.155445  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:47.368132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:47.652347  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:47.654903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:47.654929  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:47.868615  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:48.151762  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:48.154894  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:48.155015  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:48.367728  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:48.652716  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:48.655105  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:48.655114  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:48.867844  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:49.151899  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:49.155222  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:49.155285  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:49.367647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:49.651960  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:49.655182  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:49.655212  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:49.867701  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:50.152323  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:50.154730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:50.154952  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:50.368036  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:50.652752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:50.655140  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:50.655212  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:50.867998  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:51.152002  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:51.155125  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:51.155152  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:51.367814  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:51.652049  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:51.655522  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:51.655726  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:51.868294  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:52.151791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:52.155565  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:52.155573  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:52.367865  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:52.652161  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:52.655512  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:52.655672  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:52.868579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:53.151650  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:53.154924  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:53.155034  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:53.369092  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:53.651132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:53.655513  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:53.655522  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:53.868691  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:54.152450  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:54.155354  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:54.155524  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:54.367600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:54.651882  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:54.655373  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:54.655408  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:54.867056  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:55.152214  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:55.154682  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:55.154691  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:55.367828  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:55.652289  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:55.654838  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:55.654919  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:55.868482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:56.152185  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:56.155573  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:56.155680  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:56.367605  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:56.652000  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:56.655613  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:56.655628  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:56.867754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:57.152556  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:57.155032  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:57.155095  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:57.367975  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:57.652348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:57.654696  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:57.654741  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:57.868401  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:58.153486  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:58.155941  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:58.156005  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:58.368023  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:58.652886  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:58.654744  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:58.654924  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:58.867833  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:59.152068  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:59.155056  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:59.155191  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:59.368282  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:59.651560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:59.654879  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:59.654906  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:59.868124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:00.151834  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:00.155229  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:00.155287  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:00.368228  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:00.651552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:00.654864  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:00.655039  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:00.867812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:01.152355  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:01.155216  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:01.155250  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:01.367206  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:01.651490  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:01.655688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:01.655736  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:01.868528  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:02.152001  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:02.155540  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:02.155683  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:02.367787  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:02.652284  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:02.654662  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:02.654849  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:02.868355  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:03.151870  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:03.155448  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:03.155589  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:03.369165  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:03.652124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:03.655412  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:03.655514  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:03.867952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:04.152595  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:04.154738  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:04.154768  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:04.368177  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:04.651492  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:04.654766  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:04.654890  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:04.867847  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:05.152178  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:05.155407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:05.155591  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:05.367682  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:05.652426  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:05.655066  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:05.655077  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:05.868692  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:06.151879  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:06.154999  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:06.155191  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:06.368983  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:06.652433  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:06.655105  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:06.655103  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:06.867405  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:07.151744  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:07.155222  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:07.155303  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:07.367552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:07.651596  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:07.654914  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:07.655059  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:07.868458  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:08.152215  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:08.154616  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:08.154655  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:08.367845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:08.652783  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:08.655112  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:08.655120  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:08.868071  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:09.151544  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:09.155208  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:09.155226  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:09.367504  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:09.652199  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:09.655116  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:09.655205  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:09.867581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:10.152537  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:10.155961  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:10.155972  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:10.367914  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:10.652499  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:10.655560  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:10.655570  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:10.867688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:11.153765  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:11.156270  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:11.156301  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:11.367137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:11.652938  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:11.655212  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:11.655254  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:11.867526  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:12.152762  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:12.155539  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:12.155611  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:12.367745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:12.653490  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:12.655575  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:12.655592  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:12.867930  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:13.152233  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:13.154692  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:13.154928  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:13.368718  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:13.652385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:13.655028  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:13.655076  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:13.868860  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:14.152353  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:14.154742  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:14.155285  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:14.367623  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:14.651871  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:14.655140  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:14.655187  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:14.867455  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:15.151851  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:15.155143  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:15.155247  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:15.367164  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:15.652193  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:15.655452  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:15.655496  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:15.867913  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:16.152181  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:16.155667  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:16.155764  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:16.368289  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:16.651762  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:16.654913  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:16.654985  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:16.868273  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:17.152523  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:17.155730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:17.156762  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:17.369278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:17.653153  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:17.656847  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:17.656957  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:17.872367  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:18.152950  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:18.155133  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:18.155208  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:18.368554  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:18.652083  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:18.656110  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:18.656132  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:18.867845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:19.152657  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:19.155336  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:19.155360  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:19.367646  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:19.652603  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:19.655013  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:19.655062  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:19.868632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:20.151907  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:20.155327  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:20.155416  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:20.367287  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:20.651614  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:20.654876  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:20.654920  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:20.867932  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:21.152185  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:21.155533  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:21.155722  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:21.367894  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:21.652307  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:21.654756  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:21.654995  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:21.869050  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:22.151999  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:22.155129  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:22.155241  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:22.367234  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:22.651475  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:22.655728  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:22.655801  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:22.867063  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:23.152370  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:23.154656  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:23.154775  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:23.368226  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:23.651514  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:23.654966  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:23.654979  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:23.867379  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:24.152074  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:24.155478  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:24.155627  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:24.367613  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:24.651861  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:24.655241  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:24.655314  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:24.867408  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:25.151695  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:25.155019  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:25.155047  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:25.368563  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:25.652014  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:25.655145  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:25.655425  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:25.867208  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:26.151957  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:26.156991  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:26.157177  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:26.367383  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:26.651982  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:26.655413  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:26.655465  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:26.867368  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:27.151925  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:27.154970  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:27.155019  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:27.368160  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:27.651611  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:27.654847  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:27.654859  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:27.867942  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:28.152874  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:28.154630  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:28.154694  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:28.368049  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:28.651257  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:28.655624  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:28.655667  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:28.867801  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:29.152524  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:29.156020  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:29.156108  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:29.368351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:29.651663  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:29.655003  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:29.655207  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:29.867344  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:30.152248  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:30.154952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:30.155114  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:30.368836  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:30.652345  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:30.655054  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:30.655103  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:30.868484  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:31.151558  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:31.154855  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:31.154863  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:31.368442  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:31.651568  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:31.655113  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:31.655180  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:31.868266  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:32.151815  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:32.155138  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:32.155240  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:32.367272  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:32.651711  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:32.655134  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:32.655194  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:32.867490  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:33.151598  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:33.155259  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:33.155287  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:33.367609  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:33.651854  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:33.655208  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:33.655324  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:33.867858  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:34.153080  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:34.155098  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:34.155341  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:34.367674  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:34.651945  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:34.655335  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:34.655353  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:34.867581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:35.151897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:35.155637  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:35.155683  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:35.367456  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:35.652090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:35.655528  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:35.655648  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:35.867911  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:36.152606  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:36.154971  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:36.154994  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:36.368455  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:36.652303  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:36.655073  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:36.655187  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:36.867363  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:37.151724  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:37.155448  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:37.155569  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:37.367351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:37.651839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:37.655606  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:37.655791  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:37.868338  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:38.152142  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:38.155217  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:38.155532  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:38.368358  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:38.651898  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:38.655540  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:38.655567  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:38.868334  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:39.151513  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:39.154861  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:39.154907  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:39.368768  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:39.652068  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:39.655443  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:39.655573  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:39.869959  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:40.152619  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:40.154596  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:40.154675  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:40.367925  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:40.652249  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:40.654706  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:40.654733  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:40.868289  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:41.152483  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:41.154991  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:41.155032  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:41.368359  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:41.651646  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:41.655296  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:41.655374  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:41.867137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:42.152187  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:42.155835  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:42.155854  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:42.367912  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:42.652016  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:42.655327  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:42.655409  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:42.867319  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:43.151608  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:43.154828  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:43.155016  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:43.368488  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:43.653811  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:43.656445  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:43.656565  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:43.867120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:44.152791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:44.154576  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:44.154723  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:44.367602  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:44.651437  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:44.655676  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:44.655824  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:44.867828  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:45.152180  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:45.155737  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:45.155763  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:45.367992  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:45.652246  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:45.654603  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:45.654734  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:45.868092  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:46.152800  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:46.154702  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:46.154910  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:46.367595  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:46.651605  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:46.654693  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:46.654706  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:46.867547  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:47.151877  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:47.155211  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:47.155305  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:47.367273  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:47.651756  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:47.655345  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:47.655367  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:47.867318  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:48.151786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:48.155034  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:48.155115  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:48.368351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:48.651521  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:48.655726  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:48.655766  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:48.868163  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:49.151496  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:49.155224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:49.155243  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:49.366955  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:49.652531  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:49.655173  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:49.655184  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:49.867097  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:50.152201  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:50.155505  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:50.155636  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:50.367562  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:50.651843  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:50.655301  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:50.655384  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:50.868028  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:51.152914  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:51.155252  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:51.155462  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:51.367149  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:51.651713  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:51.655354  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:51.655450  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:51.867440  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:52.151891  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:52.155305  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:52.155443  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:52.368461  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:52.652610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:52.655667  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:52.655854  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:52.901721  522590 kapi.go:107] duration metric: took 3m34.537544348s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 23:52:52.906543  522590 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-069011 cluster.
	I0916 23:52:52.912324  522590 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 23:52:52.913737  522590 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
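	[Editor's note: the three out.go notices above describe how the gcp-auth addon behaves once its webhook is ready: credentials are injected into every newly created pod, a pod can opt out via a label with the `gcp-auth-skip-secret` key, and pre-existing pods pick up credentials only after a recreate or a refreshed addon enable. A minimal sketch of both actions, based solely on those notices — `my-pod` is a placeholder name, and the label value is arbitrary since the notice only requires the key to be present:

	    # Opt one pod out of credential injection (label key taken from the notice above)
	    kubectl label pod my-pod gcp-auth-skip-secret=true

	    # Rerun the addon with --refresh so pods created before the webhook was ready get credentials mounted
	    minikube addons enable gcp-auth --refresh
	]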
	I0916 23:52:53.153197  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:53.155660  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:53.155666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:53.652828  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:53.655014  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:53.655110  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:54.152324  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:54.155476  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:54.155496  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:54.652106  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:54.655581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:54.655609  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:55.152128  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:55.155885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:55.156039  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:55.652641  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:55.654855  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:55.654978  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:56.152674  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:56.154874  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:56.155000  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:56.652035  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:56.655457  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:56.655496  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:57.152186  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:57.155542  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:57.155561  522590 kapi.go:107] duration metric: took 3m45.503354476s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 23:52:57.652350  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:57.655498  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:58.152881  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:58.154850  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:58.652665  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:58.654696  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:59.152543  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:59.154283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:59.653277  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:59.659941  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:00.152852  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:00.154649  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:00.652327  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:00.654800  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:01.152414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:01.154525  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:01.651817  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:01.655138  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:02.152332  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:02.154656  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:02.653502  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:02.656037  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:03.151857  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:03.155055  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:03.652334  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:03.654876  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:04.152174  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:04.155870  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:04.653124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:04.655053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:05.153568  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:05.155625  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:05.653230  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:05.655236  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:06.152361  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:06.154928  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:06.653059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:06.656200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:07.152336  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:07.155224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:07.652346  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:07.655712  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:08.155752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:08.155824  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:08.653610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:08.655208  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:09.152628  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:09.154934  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:09.652494  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:09.655144  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:10.154348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:10.155986  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:10.652369  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:10.655443  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:11.152148  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:11.155670  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:11.652553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:11.655243  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:12.152796  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:12.155106  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:12.651747  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:12.655634  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:13.153010  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:13.155374  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:13.654738  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:13.656482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:14.152952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:14.155229  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:14.652523  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:14.655028  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:15.152364  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:15.155721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:15.655954  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:15.656795  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:16.152967  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:16.154926  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:16.653027  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:16.655826  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:17.153039  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:17.154839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:17.653034  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:17.655038  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:18.152156  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:18.156123  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:18.651828  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:18.654999  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:19.151648  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:19.154596  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:19.652222  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:19.654551  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:20.155150  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:20.155193  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:20.652029  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:20.655101  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:21.151749  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:21.154961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:21.651672  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:21.655009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:22.152329  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:22.154730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:22.652063  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:22.655272  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:23.152182  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:23.155422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:23.652218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:23.654560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:24.152574  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:24.155253  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:24.652502  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:24.655345  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:25.151663  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:25.155115  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:25.651721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:25.655044  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:26.152383  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:26.155509  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:26.652354  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:26.654747  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:27.169011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:27.169001  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:27.653424  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:27.655714  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:28.152979  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:28.254144  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:28.651804  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:28.655470  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:29.151827  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:29.155108  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:29.652422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:29.655116  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:30.152193  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:30.155976  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:30.652210  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:30.654980  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:31.151709  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:31.155038  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:31.651589  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:31.655050  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:32.151868  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:32.155145  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:32.652363  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:32.655892  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:33.151643  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:33.154810  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:33.653583  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:33.655279  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:34.153153  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:34.155522  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:34.652584  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:34.655570  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:35.151580  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:35.156561  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:35.652732  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:35.655133  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:36.155361  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:36.158601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:36.652275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:36.654674  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:37.153755  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:37.155714  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:37.652926  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:37.654759  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:38.151466  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:38.154733  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:38.653313  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:38.655745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:39.152234  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:39.155638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:39.652445  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:39.654541  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:40.152461  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:40.155143  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:40.652312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:40.654686  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:41.152156  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:41.155170  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:41.651644  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:41.654733  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:42.152309  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:42.154360  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:42.652338  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:42.654550  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:43.151904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:43.154960  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:43.652091  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:43.655542  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:44.151570  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:44.154712  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:44.652708  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:44.654522  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:45.151593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:45.154608  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:45.651922  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:45.655174  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:46.151376  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:46.155482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:46.652627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:46.654516  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:47.151782  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:47.154824  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:47.652429  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:47.654757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:48.152137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:48.154936  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:48.651792  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:48.654929  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:49.152207  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:49.155200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:49.652077  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:49.655059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:50.152055  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:50.155283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:50.651757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:50.654677  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:51.152004  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:51.154803  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:51.653046  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:51.654923  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:52.152123  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:52.154978  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:52.651950  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:52.654986  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:53.151595  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:53.154725  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:53.652661  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:53.654540  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:54.152011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:54.155079  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:54.652239  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:54.654476  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:55.151772  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:55.155226  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:55.652520  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:55.655124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:56.151415  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:56.155604  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:56.652777  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:56.654897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:57.152275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:57.155829  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:57.653025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:57.654754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:58.152978  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:58.154716  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:58.652635  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:58.654449  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:59.152070  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:59.155270  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:59.652577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:59.655424  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:00.152756  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:00.154426  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:00.651964  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:00.655181  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:01.151369  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:01.155561  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:01.651593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:01.654586  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:02.152252  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:02.154655  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:02.652610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:02.654423  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:03.152030  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:03.155167  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:03.651855  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:03.654881  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:04.151556  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:04.154852  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:04.652834  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:04.654500  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:05.152255  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:05.154344  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:05.652483  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:05.655325  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:06.151729  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:06.154664  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:06.652904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:06.654681  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:07.152267  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:07.154724  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:07.652291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:07.654988  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:08.151577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:08.154865  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:08.652678  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:08.654618  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:09.152302  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:09.154688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:09.653092  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:09.654963  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:10.151758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:10.154735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:10.652999  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:10.654845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:11.151513  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:11.154498  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:11.652494  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:11.654909  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:12.151298  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:12.155557  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:12.652643  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:12.654491  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:13.152751  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:13.155246  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:13.652126  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:13.655183  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:14.151763  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:14.155046  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:14.652276  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:14.654785  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:15.152658  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:15.154758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:15.652985  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:15.655060  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:16.151705  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:16.154775  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:16.652773  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:16.654589  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:17.152592  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:17.155097  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:17.651889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:17.655277  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:18.152217  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:18.154701  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:18.652903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:18.654813  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:19.152686  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:19.154506  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:19.652260  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:19.654251  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:20.152385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:20.154777  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:20.652915  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:20.654754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:21.152381  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:21.155278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:21.651555  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:21.654768  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:22.152695  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:22.154647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:22.652919  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:22.654785  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:23.151929  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:23.155096  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:23.652215  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:23.654600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:24.152243  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:24.154806  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:24.653577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:24.655336  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:25.151915  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:25.154836  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:25.651480  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:25.655757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:26.152467  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:26.154712  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:26.653379  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:26.655466  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:27.151800  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:27.155291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:27.653102  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:27.655592  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:28.153140  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:28.155428  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:28.652276  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:28.654838  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:29.153210  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:29.155329  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:29.652338  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:29.654662  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:30.152491  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:30.154729  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:30.653037  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:30.654741  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:31.152830  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:31.154474  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:31.652230  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:31.654509  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:32.151920  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:32.154827  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:32.653191  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:32.655219  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:33.151306  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:33.155960  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:33.651717  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:33.655110  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:34.152304  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:34.154575  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:34.652514  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:34.654778  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:35.152332  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:35.154701  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:35.652961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:35.654516  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:36.151632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:36.154754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:36.654330  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:36.655691  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:37.152418  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:37.154851  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:37.651435  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:37.654582  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:38.153087  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:38.155042  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:38.652337  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:38.654583  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:39.152997  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:39.154432  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:39.652600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:39.654685  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:40.152066  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:40.154971  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:40.651875  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:40.655064  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:41.152238  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:41.154411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:41.651824  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:41.655370  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:42.152256  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:42.154799  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:42.652896  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:42.655256  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:43.152778  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:43.154615  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:43.652772  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:43.654597  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:44.152798  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:44.155091  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:44.652248  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:44.654728  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:45.152282  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:45.154468  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:45.652120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:45.655482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:46.151671  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:46.154724  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:46.653242  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:46.654823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:47.152812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:47.155015  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:47.651579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:47.654786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:48.152839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:48.155119  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:48.652214  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:48.654840  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:49.152996  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:49.155254  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:49.651623  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:49.654685  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:50.153897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:50.155803  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:50.652443  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:50.654867  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:51.152374  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:51.154640  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:51.653033  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:51.654888  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:52.152649  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:52.154604  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:52.652521  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:52.654615  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:53.152209  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:53.154579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:53.652590  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:53.654414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:54.152200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:54.155017  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:54.651951  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:54.655307  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:55.151878  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:55.155133  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:55.651739  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:55.654805  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:56.152326  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:56.154364  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:56.652520  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:56.654812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:57.152821  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:57.154939  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:57.651434  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:57.655826  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:58.152103  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:58.155132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:58.651824  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:58.655072  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:59.154539  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:59.155149  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:59.652232  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:59.654796  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:00.151638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:00.154787  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:00.652885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:00.654626  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:01.152069  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:01.155444  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:01.652069  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:01.655407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:02.152172  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:02.156173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:02.652301  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:02.654808  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:03.153293  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:03.155684  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:03.652844  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:03.654749  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:04.152881  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:04.155246  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:04.652609  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:04.655098  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:05.151757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:05.155258  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:05.652511  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:05.654688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:06.152258  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:06.154829  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:06.653049  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:06.654904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:07.151579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:07.154591  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:07.652331  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:07.654994  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:08.151784  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:08.154921  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:08.652325  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:08.655067  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:09.151900  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:09.155072  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:09.651978  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:09.655300  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:10.151961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:10.154914  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:10.652232  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:10.654644  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:11.152090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:11.155188  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:11.652025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:11.652821  522590 kapi.go:107] duration metric: took 6m0.000625805s to wait for kubernetes.io/minikube-addons=registry ...
	W0916 23:55:11.652991  522590 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0916 23:55:12.148606  522590 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=csi-hostpath-driver" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0916 23:55:12.148655  522590 kapi.go:107] duration metric: took 6m0.000415083s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0916 23:55:12.148771  522590 out.go:285] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0916 23:55:12.151062  522590 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, ingress-dns, amd-gpu-device-plugin, storage-provisioner, default-storageclass, storage-provisioner-rancher, cloud-spanner, metrics-server, yakd, volumesnapshots, gcp-auth, ingress
	I0916 23:55:12.152575  522590 addons.go:514] duration metric: took 6m2.25568849s for enable addons: enabled=[registry-creds nvidia-device-plugin ingress-dns amd-gpu-device-plugin storage-provisioner default-storageclass storage-provisioner-rancher cloud-spanner metrics-server yakd volumesnapshots gcp-auth ingress]
	I0916 23:55:12.152638  522590 start.go:246] waiting for cluster config update ...
	I0916 23:55:12.152661  522590 start.go:255] writing updated cluster config ...
	I0916 23:55:12.152955  522590 ssh_runner.go:195] Run: rm -f paused
	I0916 23:55:12.157549  522590 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:55:12.161141  522590 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m872b" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.165703  522590 pod_ready.go:94] pod "coredns-66bc5c9577-m872b" is "Ready"
	I0916 23:55:12.165731  522590 pod_ready.go:86] duration metric: took 4.567019ms for pod "coredns-66bc5c9577-m872b" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.168067  522590 pod_ready.go:83] waiting for pod "etcd-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.172550  522590 pod_ready.go:94] pod "etcd-addons-069011" is "Ready"
	I0916 23:55:12.172583  522590 pod_ready.go:86] duration metric: took 4.489308ms for pod "etcd-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.174872  522590 pod_ready.go:83] waiting for pod "kube-apiserver-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.179401  522590 pod_ready.go:94] pod "kube-apiserver-addons-069011" is "Ready"
	I0916 23:55:12.179432  522590 pod_ready.go:86] duration metric: took 4.532992ms for pod "kube-apiserver-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.181473  522590 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.561817  522590 pod_ready.go:94] pod "kube-controller-manager-addons-069011" is "Ready"
	I0916 23:55:12.561846  522590 pod_ready.go:86] duration metric: took 380.349392ms for pod "kube-controller-manager-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.763149  522590 pod_ready.go:83] waiting for pod "kube-proxy-v85kq" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.161850  522590 pod_ready.go:94] pod "kube-proxy-v85kq" is "Ready"
	I0916 23:55:13.161880  522590 pod_ready.go:86] duration metric: took 398.696904ms for pod "kube-proxy-v85kq" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.362802  522590 pod_ready.go:83] waiting for pod "kube-scheduler-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.761895  522590 pod_ready.go:94] pod "kube-scheduler-addons-069011" is "Ready"
	I0916 23:55:13.761929  522590 pod_ready.go:86] duration metric: took 399.094008ms for pod "kube-scheduler-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.761944  522590 pod_ready.go:40] duration metric: took 1.604356273s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:55:13.810173  522590 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0916 23:55:13.812279  522590 out.go:179] * Done! kubectl is now configured to use "addons-069011" cluster and "default" namespace by default
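
The two 6m0s timeouts above are minikube's generic addon wait (kapi.go:96): it relists pods matching a label selector roughly every 500ms until they run or the context deadline expires, which is why the same "current state: Pending" line repeats twice per second for six minutes. A minimal client-go sketch of that polling pattern, assuming a kubeconfig at the default path; waitForLabel and its phase-only readiness check are illustrative simplifications, not minikube's actual helper:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel polls every 500ms until all pods matching selector in ns
    // are Running, or until timeout; a timeout surfaces as the
    // "context deadline exceeded" seen in the log above.
    func waitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil {
                    return false, nil // treat API hiccups as transient and keep polling
                }
                if len(pods.Items) == 0 {
                    return false, nil // nothing scheduled yet
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil // still Pending, as in the log
                    }
                }
                return true, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        err = waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute)
        fmt.Println("wait result:", err)
    }

In this run both selectors (registry and csi-hostpath-driver) stayed Pending for the whole window, so the loop returns context deadline exceeded and the two addons are reported as failed even though the rest of the cluster came up.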
	
	
	==> CRI-O <==
	Sep 16 23:59:59 addons-069011 crio[933]: time="2025-09-16 23:59:59.174670840Z" level=info msg="Image docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d not found" id=1966db86-1f15-42e4-8581-e2db1138357e name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:00:02 addons-069011 crio[933]: time="2025-09-17 00:00:02.174748781Z" level=info msg="Checking image status: docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624" id=547d0c06-9a95-4cbf-9b72-e1b41ff74dc2 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:00:02 addons-069011 crio[933]: time="2025-09-17 00:00:02.175059830Z" level=info msg="Image docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 not found" id=547d0c06-9a95-4cbf-9b72-e1b41ff74dc2 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:00:07 addons-069011 crio[933]: time="2025-09-17 00:00:07.523600394Z" level=info msg="Pulling image: docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f" id=da117300-2943-4656-8124-3a361dcb2b16 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:00:07 addons-069011 crio[933]: time="2025-09-17 00:00:07.527443162Z" level=info msg="Trying to access \"docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f\""
	Sep 17 00:00:14 addons-069011 crio[933]: time="2025-09-17 00:00:14.176045393Z" level=info msg="Checking image status: docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d" id=f1a479e8-41e9-4dd1-b0eb-74feda8217e7 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:00:14 addons-069011 crio[933]: time="2025-09-17 00:00:14.176221722Z" level=info msg="Checking image status: docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624" id=dd6a2f3a-8174-4084-aa1a-13f00234d6ba name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:00:14 addons-069011 crio[933]: time="2025-09-17 00:00:14.176349594Z" level=info msg="Image docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d not found" id=f1a479e8-41e9-4dd1-b0eb-74feda8217e7 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:00:14 addons-069011 crio[933]: time="2025-09-17 00:00:14.176522762Z" level=info msg="Image docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 not found" id=dd6a2f3a-8174-4084-aa1a-13f00234d6ba name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:00:20 addons-069011 crio[933]: time="2025-09-17 00:00:20.174738013Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=1376d712-6bd7-40d1-b6db-eb2b4c9474b3 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:00:20 addons-069011 crio[933]: time="2025-09-17 00:00:20.175018494Z" level=info msg="Image docker.io/nginx:alpine not found" id=1376d712-6bd7-40d1-b6db-eb2b4c9474b3 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:00:27 addons-069011 crio[933]: time="2025-09-17 00:00:27.174032824Z" level=info msg="Checking image status: docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d" id=be12a202-4a28-4efb-9396-c5867f879fd8 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:00:27 addons-069011 crio[933]: time="2025-09-17 00:00:27.174407147Z" level=info msg="Image docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d not found" id=be12a202-4a28-4efb-9396-c5867f879fd8 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:00:29 addons-069011 crio[933]: time="2025-09-17 00:00:29.174481423Z" level=info msg="Checking image status: docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624" id=fecf4475-8447-48e3-ac20-8505d7bfae23 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:00:29 addons-069011 crio[933]: time="2025-09-17 00:00:29.174766529Z" level=info msg="Image docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 not found" id=fecf4475-8447-48e3-ac20-8505d7bfae23 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:00:34 addons-069011 crio[933]: time="2025-09-17 00:00:34.175572005Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=82a11abf-3b79-4f37-8a04-ff05443414c4 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:00:34 addons-069011 crio[933]: time="2025-09-17 00:00:34.175871520Z" level=info msg="Image docker.io/nginx:alpine not found" id=82a11abf-3b79-4f37-8a04-ff05443414c4 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:00:37 addons-069011 crio[933]: time="2025-09-17 00:00:37.960707757Z" level=info msg="Pulling image: docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89" id=88700966-f58e-4767-8e0f-aa8cd0bef169 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:00:37 addons-069011 crio[933]: time="2025-09-17 00:00:37.964707392Z" level=info msg="Trying to access \"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\""
	Sep 17 00:00:38 addons-069011 crio[933]: time="2025-09-17 00:00:38.174532502Z" level=info msg="Checking image status: docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d" id=e4d30bf2-df04-48b1-8fa2-f5f131f13857 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:00:38 addons-069011 crio[933]: time="2025-09-17 00:00:38.174861836Z" level=info msg="Image docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d not found" id=e4d30bf2-df04-48b1-8fa2-f5f131f13857 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:00:41 addons-069011 crio[933]: time="2025-09-17 00:00:41.175008932Z" level=info msg="Checking image status: docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624" id=39ed9030-e3f2-40f7-99b8-cedaccd6c0a5 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:00:41 addons-069011 crio[933]: time="2025-09-17 00:00:41.175385363Z" level=info msg="Image docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 not found" id=39ed9030-e3f2-40f7-99b8-cedaccd6c0a5 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:00:50 addons-069011 crio[933]: time="2025-09-17 00:00:50.174365350Z" level=info msg="Checking image status: docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f" id=ac04b815-b6e0-4325-8da7-34fae2d79c87 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:00:50 addons-069011 crio[933]: time="2025-09-17 00:00:50.174683773Z" level=info msg="Image docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f not found" id=ac04b815-b6e0-4325-8da7-34fae2d79c87 name=/runtime.v1.ImageService/ImageStatus
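
Each "Checking image status" / "Image ... not found" pair above is CRI-O answering a /runtime.v1.ImageService/ImageStatus RPC from the kubelet: "not found" means the docker.io pulls for the registry, yakd, nginx, and rocm images have still not succeeded, so every status probe comes back empty and the corresponding pods stay Pending. A sketch of issuing the same RPC directly against CRI-O, assuming the default /var/run/crio/crio.sock socket (root access required) and the k8s.io/cri-api v1 bindings; this is an illustration, not what the kubelet literally runs:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumption: CRI-O is listening on its default unix socket.
        conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        img := runtimeapi.NewImageServiceClient(conn)
        resp, err := img.ImageStatus(context.Background(), &runtimeapi.ImageStatusRequest{
            Image: &runtimeapi.ImageSpec{Image: "docker.io/nginx:alpine"},
        })
        if err != nil {
            panic(err)
        }
        // resp.Image is nil when the image is absent: the "not found" case logged above.
        fmt.Printf("image present: %v\n", resp.Image != nil)
    }

On the node, "sudo crictl inspecti docker.io/nginx:alpine" exercises the same endpoint from the command line.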
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	8fc15d8cb7dd5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          2 minutes ago       Running             csi-snapshotter                          0                   e614fc1047195       csi-hostpathplugin-s98vb
	295b9edc02db1       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago       Running             csi-provisioner                          0                   e614fc1047195       csi-hostpathplugin-s98vb
	3bebfc3ce5f89       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          4 minutes ago       Running             busybox                                  0                   b34e9dc849123       busybox
	0994d530b2186       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            4 minutes ago       Running             liveness-probe                           0                   e614fc1047195       csi-hostpathplugin-s98vb
	d78ede218b3d9       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           6 minutes ago       Running             hostpath                                 0                   e614fc1047195       csi-hostpathplugin-s98vb
	16a4495ac9a55       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   e614fc1047195       csi-hostpathplugin-s98vb
	ab63cb98da9fa       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             7 minutes ago       Running             controller                               0                   1c8433f3bdf68       ingress-nginx-controller-9cc49f96f-4m84v
	cb0aaa55cf5e9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            8 minutes ago       Running             gadget                                   0                   38b62a86f7523       gadget-g862x
	75b35093f1f14       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              9 minutes ago       Running             registry-proxy                           0                   f2e835ff4c172       registry-proxy-gtpv9
	af48fae595f24       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      9 minutes ago       Running             volume-snapshot-controller               0                   7daa29e729a88       snapshot-controller-7d9fbc56b8-st98r
	fce1ccd8d33b3       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   9 minutes ago       Running             csi-external-health-monitor-controller   0                   e614fc1047195       csi-hostpathplugin-s98vb
	87609248fc31a       gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58                               10 minutes ago      Running             cloud-spanner-emulator                   0                   843001c23149a       cloud-spanner-emulator-85f6b7fc65-wtp6g
	0e4759a430832       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                                             10 minutes ago      Exited              patch                                    2                   0937f6f98ea11       ingress-nginx-admission-patch-sp7zb
	3c653d4c50b5c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      10 minutes ago      Running             volume-snapshot-controller               0                   4be25aad82a4e       snapshot-controller-7d9fbc56b8-s7m82
	11ae5f470bf10       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   10 minutes ago      Exited              create                                   0                   d933a3ae75df0       ingress-nginx-admission-create-wj8lw
	0957eacca23bd       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              10 minutes ago      Running             csi-resizer                              0                   b8131d2ee78de       csi-hostpath-resizer-0
	ad4a09c21105c       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             10 minutes ago      Running             csi-attacher                             0                   15f9a9c33b53e       csi-hostpath-attacher-0
	c1b11b9e2fae1       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             10 minutes ago      Running             local-path-provisioner                   0                   be69758a594c2       local-path-provisioner-648f6765c9-4qs6g
	7d0db99be084d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             10 minutes ago      Running             storage-provisioner                      0                   e26878809420e       storage-provisioner
	b62ac7b1e2d93       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             10 minutes ago      Running             coredns                                  0                   90cd65a058e3e       coredns-66bc5c9577-m872b
	81f4db589dfd0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             11 minutes ago      Running             kindnet-cni                              0                   282dceccf27e4       kindnet-hn7tx
	8204c89cdc90d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                                             11 minutes ago      Running             kube-proxy                               0                   076ce47b67764       kube-proxy-v85kq
	d1d2d3ef1a2d6       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                                             11 minutes ago      Running             kube-controller-manager                  0                   2befa508c819b       kube-controller-manager-addons-069011
	f4991aa96dbe9       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                                             11 minutes ago      Running             kube-apiserver                           0                   24f1de8dafedd       kube-apiserver-addons-069011
	ecbc264153ff2       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                                             11 minutes ago      Running             kube-scheduler                           0                   3af000cb5a57c       kube-scheduler-addons-069011
	5a81076e6d9a8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             11 minutes ago      Running             etcd                                     0                   f590790ed13d4       etcd-addons-069011
	
	
	==> coredns [b62ac7b1e2d935063ca8c0594642886e49ad0423507f04d148e7bd385ca935ce] <==
	[INFO] 10.244.0.16:47608 - 63622 "A IN registry.kube-system.svc.cluster.local.local. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.00663405s
	[INFO] 10.244.0.16:47608 - 7408 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.00009689s
	[INFO] 10.244.0.16:47608 - 60567 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.000133977s
	[INFO] 10.244.0.16:47608 - 48871 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000107636s
	[INFO] 10.244.0.16:47608 - 27918 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000180048s
	[INFO] 10.244.0.16:47608 - 33477 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000081592s
	[INFO] 10.244.0.16:47608 - 23485 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000100954s
	[INFO] 10.244.0.16:47608 - 54763 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000189517s
	[INFO] 10.244.0.16:47608 - 65137 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000192931s
	[INFO] 10.244.0.16:45704 - 34960 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000190704s
	[INFO] 10.244.0.16:45704 - 64017 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.0002087s
	[INFO] 10.244.0.16:45704 - 40411 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000138237s
	[INFO] 10.244.0.16:45704 - 2089 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000183365s
	[INFO] 10.244.0.16:45704 - 24778 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000128781s
	[INFO] 10.244.0.16:45704 - 61480 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000176896s
	[INFO] 10.244.0.16:45704 - 23331 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004230158s
	[INFO] 10.244.0.16:45704 - 62550 "A IN registry.kube-system.svc.cluster.local.local. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004922821s
	[INFO] 10.244.0.16:45704 - 1866 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.000089533s
	[INFO] 10.244.0.16:45704 - 20615 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.000075261s
	[INFO] 10.244.0.16:45704 - 679 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000090031s
	[INFO] 10.244.0.16:45704 - 32071 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000126618s
	[INFO] 10.244.0.16:45704 - 19100 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000069918s
	[INFO] 10.244.0.16:45704 - 8658 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000070071s
	[INFO] 10.244.0.16:45704 - 64897 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000152016s
	[INFO] 10.244.0.16:45704 - 27833 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000138535s
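
Note: the NXDOMAIN/NOERROR pattern above is ordinary search-path expansion, not a DNS fault. With the default pod resolv.conf (options ndots:5), registry.kube-system.svc.cluster.local has fewer than five dots, so the resolver walks every search suffix first, including the GCE host suffixes the node inherited (us-east4-a.c.k8s-minikube.internal, c.k8s-minikube.internal, google.internal), before the absolute name resolves with NOERROR. A quick way to confirm from any running pod (pod name is a placeholder):

	kubectl --context addons-069011 -n kube-system exec <pod> -- cat /etc/resolv.conf
	# expected: search kube-system.svc.cluster.local svc.cluster.local cluster.local ...
	#           options ndots:5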
	
	
	==> describe nodes <==
	Name:               addons-069011
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-069011
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=addons-069011
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_49_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-069011
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-069011"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:49:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-069011
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:00:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Sep 2025 23:58:45 +0000   Tue, 16 Sep 2025 23:49:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Sep 2025 23:58:45 +0000   Tue, 16 Sep 2025 23:49:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Sep 2025 23:58:45 +0000   Tue, 16 Sep 2025 23:49:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Sep 2025 23:58:45 +0000   Tue, 16 Sep 2025 23:49:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-069011
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e6a06e1e17043f19f3b8f5ea0927359
	  System UUID:                fa23b867-4022-409a-8baa-bf981ffedafe
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (24 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  default                     cloud-spanner-emulator-85f6b7fc65-wtp6g     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  gadget                      gadget-g862x                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-4m84v    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         11m
	  kube-system                 amd-gpu-device-plugin-flfw9                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-m872b                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-s98vb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-addons-069011                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-hn7tx                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-addons-069011                250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-069011       200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-v85kq                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-069011                100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 registry-66898fdd98-bl4r5                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 registry-proxy-gtpv9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-7d9fbc56b8-s7m82        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-7d9fbc56b8-st98r        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-648f6765c9-4qs6g     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-5ff678cb9-pl9vq              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             438Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 11m   kube-proxy       
	  Normal  Starting                 11m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m   kubelet          Node addons-069011 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m   kubelet          Node addons-069011 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m   kubelet          Node addons-069011 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m   node-controller  Node addons-069011 event: Registered Node addons-069011 in Controller
	  Normal  NodeReady                11m   kubelet          Node addons-069011 status is now: NodeReady
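
Sanity check: the Allocated resources block is just the column sums of the pod table above. For CPU requests:

	  100m (ingress-nginx-controller) + 100m (coredns) + 100m (etcd)
	+ 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager)
	+ 100m (kube-scheduler) = 950m    ->    950m / 8000m ≈ 11%

and for memory requests 90Mi + 70Mi + 100Mi + 50Mi + 128Mi = 438Mi, matching the table. Nothing is near capacity, so the pending pods are not resource-starved; the kubelet section below points at image pulls instead.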
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
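
The repeated martian lines are, in all likelihood, noise from the Docker bridge network rather than a failure: 127.0.0.11 is the embedded DNS resolver Docker injects into containers, and a loopback destination arriving over a veth gets logged as a martian packet when martian logging is enabled. A read-only sketch for inspecting the relevant kernel knobs on the node (assuming minikube ssh accepts a trailing command, as it normally does):

	minikube ssh -p addons-069011 -- sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians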
	
	
	==> etcd [5a81076e6d9a8c9983866e09b1190810cd0059c34edeae1a479f9d18f3003a91] <==
	{"level":"warn","ts":"2025-09-16T23:49:00.991705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:00.999124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.014667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.021210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.027886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.034514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.041663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.048524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.054851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.061680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.068240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.075225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.081757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.105206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.111554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.154896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:12.666348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:12.673196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.575058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.581784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.598000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.605378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33386","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-16T23:59:00.630787Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1449}
	{"level":"info","ts":"2025-09-16T23:59:00.656834Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1449,"took":"25.282457ms","hash":3232880921,"current-db-size-bytes":5799936,"current-db-size":"5.8 MB","current-db-size-in-use-bytes":3645440,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2025-09-16T23:59:00.656898Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3232880921,"revision":1449,"compact-revision":-1}
	
	
	==> kernel <==
	 00:00:50 up  2:43,  0 users,  load average: 0.40, 5.74, 33.45
	Linux addons-069011 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [81f4db589dfd0f8f014a7fc056f2d7f752ecc52737aea10ae2f8a98d0242428b] <==
	I0916 23:58:50.184580       1 main.go:301] handling current node
	I0916 23:59:00.185628       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0916 23:59:00.185695       1 main.go:301] handling current node
	I0916 23:59:10.184064       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0916 23:59:10.184106       1 main.go:301] handling current node
	I0916 23:59:20.192428       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0916 23:59:20.192469       1 main.go:301] handling current node
	I0916 23:59:30.184242       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0916 23:59:30.184287       1 main.go:301] handling current node
	I0916 23:59:40.184201       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0916 23:59:40.184242       1 main.go:301] handling current node
	I0916 23:59:50.184930       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0916 23:59:50.184968       1 main.go:301] handling current node
	I0917 00:00:00.185865       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:00:00.185906       1 main.go:301] handling current node
	I0917 00:00:10.183969       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:00:10.184027       1 main.go:301] handling current node
	I0917 00:00:20.186069       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:00:20.186314       1 main.go:301] handling current node
	I0917 00:00:30.193384       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:00:30.193451       1 main.go:301] handling current node
	I0917 00:00:40.184852       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:00:40.184895       1 main.go:301] handling current node
	I0917 00:00:50.185931       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:00:50.185986       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f4991aa96dbe98af7f934784cdc7973d5aabec72325938f0e98ad8efde3d06e3] <==
	I0916 23:49:55.416807       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0916 23:50:10.159066       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:50:13.576455       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:51:26.206498       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:51:40.656101       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:52:37.080075       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:53:06.365528       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:53:51.505661       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:54:19.846477       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:55:21.099421       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:55:29.068080       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:56:24.856015       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0916 23:56:38.562764       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43110: use of closed network connection
	E0916 23:56:38.758708       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43158: use of closed network connection
	I0916 23:56:47.547088       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0916 23:56:47.750812       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.94.177"}
	I0916 23:56:48.077381       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.184.141"}
	I0916 23:56:56.387694       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0916 23:56:58.875443       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:57:28.517320       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:58:21.717919       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:58:53.740979       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:59:01.561467       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0916 23:59:46.839359       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:00:03.548840       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [d1d2d3ef1a2d61d604d7b7b71875c31a98127791ebbcaaae9e7c5dcebb1fd036] <==
	I0916 23:49:08.558462       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0916 23:49:08.558692       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0916 23:49:08.559424       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0916 23:49:08.560582       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0916 23:49:08.560682       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0916 23:49:08.562044       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0916 23:49:08.562105       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:49:08.562171       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:49:08.562209       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:49:08.562217       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:49:08.562221       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:49:08.563325       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:49:08.564561       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:49:08.570797       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-069011" podCIDRs=["10.244.0.0/24"]
	I0916 23:49:08.576824       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0916 23:49:38.568454       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 23:49:38.568633       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0916 23:49:38.568684       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0916 23:49:38.586865       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0916 23:49:38.591210       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0916 23:49:38.668805       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:49:38.692110       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:49:53.514314       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0916 23:56:52.202912       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0916 23:58:53.764380       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	
	
	==> kube-proxy [8204c89cdc90d58370aa745a3053c12e5b976409a1e0bedddf9508ac3e770c1f] <==
	I0916 23:49:09.803647       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:49:09.874911       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:49:09.984976       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:49:09.985628       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:49:09.986296       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:49:10.154642       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:49:10.159433       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:49:10.183201       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:49:10.195463       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:49:10.195513       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:49:10.199563       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:49:10.199664       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:49:10.200188       1 config.go:309] "Starting node config controller"
	I0916 23:49:10.200265       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:49:10.200334       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:49:10.200369       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:49:10.200991       1 config.go:200] "Starting service config controller"
	I0916 23:49:10.201078       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:49:10.299859       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:49:10.300474       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0916 23:49:10.300501       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0916 23:49:10.302086       1 shared_informer.go:356] "Caches are synced" controller="service config"
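
The only non-info line here is the startup warning about nodePortAddresses being unset, and kube-proxy itself suggests the fix (--nodeport-addresses primary). In a kubeadm-managed cluster like this one that setting lives in the kube-proxy ConfigMap; a read-only way to see what the test cluster actually runs with:

	kubectl --context addons-069011 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses

For these tests the warning is benign: NodePort traffic is simply accepted on all local IPs.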
	
	
	==> kube-scheduler [ecbc264153ff2a219390febac6665f8efc1a49ab24db502b79ba6888e6bd5b71] <==
	E0916 23:49:01.591306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0916 23:49:01.591979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0916 23:49:01.591995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:49:01.592038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0916 23:49:01.592032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0916 23:49:01.592058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0916 23:49:01.592081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0916 23:49:01.592128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0916 23:49:01.592273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0916 23:49:01.592272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0916 23:49:01.592315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0916 23:49:02.478666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0916 23:49:02.478742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0916 23:49:02.495998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0916 23:49:02.533597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0916 23:49:02.645572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0916 23:49:02.658831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0916 23:49:02.700650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:49:02.730028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0916 23:49:02.731014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0916 23:49:02.807698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0916 23:49:02.811032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0916 23:49:02.813063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0916 23:49:02.832467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I0916 23:49:05.387364       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
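
All of the "Failed to watch ... forbidden" errors fall in the first seconds after startup (23:49:01 to 23:49:02), before the scheduler's RBAC bindings were visible; the "Caches are synced" line at 23:49:05 shows the race resolving on its own. A quick impersonation check to confirm the permissions are in place afterwards:

	kubectl --context addons-069011 auth can-i list pods --as=system:kube-scheduler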
	
	
	==> kubelet <==
	Sep 17 00:00:07 addons-069011 kubelet[1557]: E0917 00:00:07.523089    1557 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 17 00:00:07 addons-069011 kubelet[1557]: E0917 00:00:07.523153    1557 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 17 00:00:07 addons-069011 kubelet[1557]: E0917 00:00:07.523415    1557 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(44795e64-34b3-4492-b6af-9e6353fa4bb4): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 00:00:07 addons-069011 kubelet[1557]: E0917 00:00:07.523477    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="44795e64-34b3-4492-b6af-9e6353fa4bb4"
	Sep 17 00:00:14 addons-069011 kubelet[1557]: E0917 00:00:14.176712    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-bl4r5" podUID="34782a61-58ac-458e-ab2f-7a22bac44c65"
	Sep 17 00:00:14 addons-069011 kubelet[1557]: E0917 00:00:14.176800    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-pl9vq" podUID="948400a2-9e11-40dd-af78-237e95b937e2"
	Sep 17 00:00:14 addons-069011 kubelet[1557]: E0917 00:00:14.318038    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067214317760097  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:00:14 addons-069011 kubelet[1557]: E0917 00:00:14.318080    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067214317760097  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:00:20 addons-069011 kubelet[1557]: E0917 00:00:20.175331    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="44795e64-34b3-4492-b6af-9e6353fa4bb4"
	Sep 17 00:00:24 addons-069011 kubelet[1557]: E0917 00:00:24.319713    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067224319453488  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:00:24 addons-069011 kubelet[1557]: E0917 00:00:24.319745    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067224319453488  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:00:27 addons-069011 kubelet[1557]: E0917 00:00:27.174710    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-bl4r5" podUID="34782a61-58ac-458e-ab2f-7a22bac44c65"
	Sep 17 00:00:29 addons-069011 kubelet[1557]: E0917 00:00:29.175135    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-pl9vq" podUID="948400a2-9e11-40dd-af78-237e95b937e2"
	Sep 17 00:00:34 addons-069011 kubelet[1557]: E0917 00:00:34.322111    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067234321820757  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:00:34 addons-069011 kubelet[1557]: E0917 00:00:34.322156    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067234321820757  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:00:37 addons-069011 kubelet[1557]: I0917 00:00:37.174353    1557 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-85f6b7fc65-wtp6g" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 00:00:37 addons-069011 kubelet[1557]: E0917 00:00:37.960203    1557 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f"
	Sep 17 00:00:37 addons-069011 kubelet[1557]: E0917 00:00:37.960268    1557 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f"
	Sep 17 00:00:37 addons-069011 kubelet[1557]: E0917 00:00:37.960494    1557 kuberuntime_manager.go:1449] "Unhandled Error" err="container amd-gpu-device-plugin start failed in pod amd-gpu-device-plugin-flfw9_kube-system(b2f08e52-5a20-4c80-bc6c-a073ebe5797b): ErrImagePull: reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 00:00:37 addons-069011 kubelet[1557]: E0917 00:00:37.960554    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"amd-gpu-device-plugin\" with ErrImagePull: \"reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/amd-gpu-device-plugin-flfw9" podUID="b2f08e52-5a20-4c80-bc6c-a073ebe5797b"
	Sep 17 00:00:41 addons-069011 kubelet[1557]: E0917 00:00:41.175762    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-pl9vq" podUID="948400a2-9e11-40dd-af78-237e95b937e2"
	Sep 17 00:00:44 addons-069011 kubelet[1557]: E0917 00:00:44.324376    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067244324008012  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:00:44 addons-069011 kubelet[1557]: E0917 00:00:44.324467    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067244324008012  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:00:50 addons-069011 kubelet[1557]: I0917 00:00:50.173835    1557 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-flfw9" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 00:00:50 addons-069011 kubelet[1557]: E0917 00:00:50.175006    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"amd-gpu-device-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f\\\": ErrImagePull: reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/amd-gpu-device-plugin-flfw9" podUID="b2f08e52-5a20-4c80-bc6c-a073ebe5797b"
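
This section carries the actual root cause of most failures in this run: every ErrImagePull is a Docker Hub toomanyrequests response, meaning the CI host exhausted its unauthenticated pull quota, and the registry, yakd, nginx, and amd-gpu-device-plugin pods are all stuck behind it. The remaining quota can be checked with Docker's documented rate-limit probe (requires curl and jq):

	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" \
	  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit

One plausible mitigation for a run like this is pre-loading the hot images from a machine with quota, e.g. minikube image load docker.io/nginx:alpine -p addons-069011, or authenticating the node's container runtime against Docker Hub.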
	
	
	==> storage-provisioner [7d0db99be084d7a7996f085af51ba0b4b9263d1a30c5ba98cac79995b3641b35] <==
	W0917 00:00:25.926921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:27.930823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:27.935574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:29.939911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:29.945542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:31.949208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:31.953456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:33.957906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:33.963984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:35.966973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:35.971071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:37.974633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:37.979461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:39.982754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:39.987967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:41.991699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:41.996849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:44.000539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:44.004481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:46.007871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:46.012294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:48.015855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:48.021463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:50.025549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:00:50.029840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
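
The wall of warnings above is a deprecation notice, not an error: the provisioner's client still reads core/v1 Endpoints (most likely for its leader-election lock), which Kubernetes deprecates in favor of discovery.k8s.io/v1 EndpointSlice from v1.33 on, and the roughly two-second cadence matches a renew loop. The replacement objects can be listed with:

	kubectl --context addons-069011 get endpointslices.discovery.k8s.io -A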
	

-- /stdout --
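
Every leader-election tick in the storage-provisioner log above trips the same deprecation warning: v1 Endpoints is deprecated in v1.33+ in favour of discovery.k8s.io/v1 EndpointSlice. As a reference point only, here is a minimal client-go sketch of the replacement read path, assuming in-cluster credentials and using the kube-dns Service as an arbitrary example target:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Assumes in-cluster credentials, as the provisioner itself would have.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// EndpointSlices for a Service carry the label
		// kubernetes.io/service-name=<service-name>.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(
			context.TODO(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=kube-dns"},
		)
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			for _, ep := range s.Endpoints {
				fmt.Println(s.Name, ep.Addresses)
			}
		}
	}

The kubernetes.io/service-name label is how slices are tied back to their Service, replacing the one-Endpoints-object-per-Service shape of the old API.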
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-069011 -n addons-069011
helpers_test.go:269: (dbg) Run:  kubectl --context addons-069011 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx ingress-nginx-admission-create-wj8lw ingress-nginx-admission-patch-sp7zb amd-gpu-device-plugin-flfw9 kube-ingress-dns-minikube registry-66898fdd98-bl4r5 yakd-dashboard-5ff678cb9-pl9vq
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Yakd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-069011 describe pod nginx ingress-nginx-admission-create-wj8lw ingress-nginx-admission-patch-sp7zb amd-gpu-device-plugin-flfw9 kube-ingress-dns-minikube registry-66898fdd98-bl4r5 yakd-dashboard-5ff678cb9-pl9vq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-069011 describe pod nginx ingress-nginx-admission-create-wj8lw ingress-nginx-admission-patch-sp7zb amd-gpu-device-plugin-flfw9 kube-ingress-dns-minikube registry-66898fdd98-bl4r5 yakd-dashboard-5ff678cb9-pl9vq: exit status 1 (78.056115ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-069011/192.168.49.2
	Start Time:       Tue, 16 Sep 2025 23:56:47 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.24
	IPs:
	  IP:  10.244.0.24
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kksmh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kksmh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m4s                 default-scheduler  Successfully assigned default/nginx to addons-069011
	  Warning  Failed     44s (x2 over 2m18s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     44s (x2 over 2m18s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    31s (x2 over 2m18s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     31s (x2 over 2m18s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    17s (x3 over 4m3s)   kubelet            Pulling image "docker.io/nginx:alpine"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-wj8lw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-sp7zb" not found
	Error from server (NotFound): pods "amd-gpu-device-plugin-flfw9" not found
	Error from server (NotFound): pods "kube-ingress-dns-minikube" not found
	Error from server (NotFound): pods "registry-66898fdd98-bl4r5" not found
	Error from server (NotFound): pods "yakd-dashboard-5ff678cb9-pl9vq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-069011 describe pod nginx ingress-nginx-admission-create-wj8lw ingress-nginx-admission-patch-sp7zb amd-gpu-device-plugin-flfw9 kube-ingress-dns-minikube registry-66898fdd98-bl4r5 yakd-dashboard-5ff678cb9-pl9vq: exit status 1
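
Every pull failure in this run is the same docker.io toomanyrequests rejection, so the interesting number is how much anonymous pull quota the runner has left. Docker's documented check is to take an anonymous token for the ratelimitpreview/test repository and read the ratelimit-limit / ratelimit-remaining headers off a HEAD request against its manifest; a small Go sketch of that probe (error handling kept minimal):

	package main

	import (
		"encoding/json"
		"fmt"
		"net/http"
	)

	func main() {
		// Step 1: anonymous token scoped to the rate-limit preview repo.
		resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var tok struct {
			Token string `json:"token"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
			panic(err)
		}

		// Step 2: HEAD the manifest; the quota comes back in response headers
		// and a HEAD request does not itself consume a pull.
		req, _ := http.NewRequest(http.MethodHead,
			"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
		req.Header.Set("Authorization", "Bearer "+tok.Token)
		res, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer res.Body.Close()
		fmt.Println("limit:    ", res.Header.Get("ratelimit-limit"))
		fmt.Println("remaining:", res.Header.Get("ratelimit-remaining"))
	}

A remaining value of 0 on the shared CI egress IP would explain every ErrImagePull in this report at once.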
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-069011 addons disable yakd --alsologtostderr -v=1: (5.721490263s)
--- FAIL: TestAddons/parallel/Yakd (128.83s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (363.65s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
helpers_test.go:337: TestAddons/parallel/AmdGpuDevicePlugin: WARNING: pod list for "kube-system" "name=amd-gpu-device-plugin" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
addons_test.go:1038: ***** TestAddons/parallel/AmdGpuDevicePlugin: pod "name=amd-gpu-device-plugin" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:1038: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-069011 -n addons-069011
addons_test.go:1038: TestAddons/parallel/AmdGpuDevicePlugin: showing logs for failed pods as of 2025-09-17 00:03:00.470175124 +0000 UTC m=+887.937518705
addons_test.go:1038: (dbg) Run:  kubectl --context addons-069011 describe po amd-gpu-device-plugin-flfw9 -n kube-system
addons_test.go:1038: (dbg) kubectl --context addons-069011 describe po amd-gpu-device-plugin-flfw9 -n kube-system:
Name:                 amd-gpu-device-plugin-flfw9
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      default
Node:                 addons-069011/192.168.49.2
Start Time:           Tue, 16 Sep 2025 23:49:50 +0000
Labels:               controller-revision-hash=7f87d6fd8d
                      k8s-app=amd-gpu-device-plugin
                      name=amd-gpu-device-plugin
                      pod-template-generation=1
Annotations:          <none>
Status:               Pending
IP:                   10.244.0.13
IPs:
  IP:           10.244.0.13
Controlled By:  DaemonSet/amd-gpu-device-plugin
Containers:
  amd-gpu-device-plugin:
    Container ID:   
    Image:          docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /sys from sys (rw)
      /var/lib/kubelet/device-plugins from dp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d8j4w (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  dp:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/device-plugins
    HostPathType:  
  sys:
    Type:          HostPath (bare host directory volume)
    Path:          /sys
    HostPathType:  
  kube-api-access-d8j4w:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/arch=amd64
Tolerations:                 CriticalAddonsOnly op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason                           Age                  From               Message
  ----     ------                           ----                 ----               -------
  Normal   Scheduled                        13m                  default-scheduler  Successfully assigned kube-system/amd-gpu-device-plugin-flfw9 to addons-069011
  Warning  Failed                           5m27s (x4 over 11m)  kubelet            Error: ErrImagePull
  Warning  FailedToRetrieveImagePullSecret  5m1s (x11 over 13m)  kubelet            Unable to retrieve some image pull secrets (gcp-auth); attempting to pull the image may not succeed.
  Warning  Failed                           5m1s (x7 over 11m)   kubelet            Error: ImagePullBackOff
  Normal   Pulling                          3m59s (x5 over 13m)  kubelet            Pulling image "docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f"
  Warning  Failed                           2m23s (x5 over 11m)  kubelet            Failed to pull image "docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f": reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff                          1s (x22 over 11m)    kubelet            Back-off pulling image "docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f"
addons_test.go:1038: (dbg) Run:  kubectl --context addons-069011 logs amd-gpu-device-plugin-flfw9 -n kube-system
addons_test.go:1038: (dbg) Non-zero exit: kubectl --context addons-069011 logs amd-gpu-device-plugin-flfw9 -n kube-system: exit status 1 (71.302808ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "amd-gpu-device-plugin" in pod "amd-gpu-device-plugin-flfw9" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:1038: kubectl --context addons-069011 logs amd-gpu-device-plugin-flfw9 -n kube-system: exit status 1
addons_test.go:1039: failed waiting for amd-gpu-device-plugin pod: name=amd-gpu-device-plugin within 6m0s: context deadline exceeded
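
The 6m0s waits that keep expiring here reduce to polling the pod list for a label selector until everything reports Running. A rough equivalent of that loop with client-go, under the assumption of an already-built clientset; waitForPodsRunning is an illustrative name, not the helper's real implementation:

	package waitutil

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsRunning polls the pod list for a label selector until every
	// matching pod is Running, or the timeout expires, the same shape as the
	// helper's 6m0s wait that failed above.
	func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					// Treat transient list errors (e.g. client-side rate
					// limiting, as in the WARNING above) as "not ready yet"
					// rather than aborting the wait.
					return false, nil
				}
				if len(pods.Items) == 0 {
					return false, nil
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil
					}
				}
				return true, nil
			})
	}

For this failure the call would be waitForPodsRunning(ctx, cs, "kube-system", "name=amd-gpu-device-plugin"), and it would time out for the same underlying reason: the image never arrives, so the pod never leaves Pending.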
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/AmdGpuDevicePlugin]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/AmdGpuDevicePlugin]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-069011
helpers_test.go:243: (dbg) docker inspect addons-069011:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1",
	        "Created": "2025-09-16T23:48:50.029636255Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 523240,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-16T23:48:50.075029861Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/hosts",
	        "LogPath": "/var/lib/docker/containers/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1/678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1-json.log",
	        "Name": "/addons-069011",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-069011:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-069011",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "678205c9d470560db34d4aa28ded20f2447b4885dcf0ffd1f8ca4178e01790c1",
	                "LowerDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b2518cbd808a66bdaad6abcb63b76ad7a400002a59e20fe30d80fbca68923d51/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-069011",
	                "Source": "/var/lib/docker/volumes/addons-069011/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-069011",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-069011",
	                "name.minikube.sigs.k8s.io": "addons-069011",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f7ea0b62281ff8981f73b140342aff58601fbb663df7278dfdd6743a41abcca5",
	            "SandboxKey": "/var/run/docker/netns/f7ea0b62281f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-069011": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:4c:3e:1e:87:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d62ec0fa3bfb3ffd62859a508f03996c549db14f34473599ddd1b9022067b7b9",
	                    "EndpointID": "f8f4fe858390c8f96bc24eec26736fad3a3b1ba30f09e93e016a6a79e947f7af",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-069011",
	                        "678205c9d470"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
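
The inspect dump above is what the harness mines for container state and the 127.0.0.1 host-port bindings (22, 2376, 5000, 8443, 32443). The same fields are reachable without shelling out, via the Docker Engine Go SDK; a sketch, assuming github.com/docker/docker/client is on the module path:

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		info, err := cli.ContainerInspect(context.Background(), "addons-069011")
		if err != nil {
			panic(err)
		}
		fmt.Println("state:", info.State.Status)
		// NetworkSettings.Ports maps container ports to host bindings,
		// e.g. 8443/tcp -> 127.0.0.1:33136 in the dump above.
		for port, bindings := range info.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
			}
		}
	}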
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-069011 -n addons-069011
helpers_test.go:252: <<< TestAddons/parallel/AmdGpuDevicePlugin FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/AmdGpuDevicePlugin]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-069011 logs -n 25: (1.489513866s)
helpers_test.go:260: TestAddons/parallel/AmdGpuDevicePlugin logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ delete  │ --all │ minikube │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ delete  │ -p download-only-997829 │ download-only-997829 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ start   │ -o=json --download-only -p download-only-515641 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-515641 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ │
	│ delete  │ --all │ minikube │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ delete  │ -p download-only-515641 │ download-only-515641 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ delete  │ -p download-only-997829 │ download-only-997829 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ delete  │ -p download-only-515641 │ download-only-515641 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ start   │ --download-only -p download-docker-660125 --alsologtostderr --driver=docker  --container-runtime=crio │ download-docker-660125 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ │
	│ delete  │ -p download-docker-660125 │ download-docker-660125 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ start   │ --download-only -p binary-mirror-785971 --alsologtostderr --binary-mirror http://127.0.0.1:38515 --driver=docker  --container-runtime=crio │ binary-mirror-785971 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ │
	│ delete  │ -p binary-mirror-785971 │ binary-mirror-785971 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ addons  │ enable dashboard -p addons-069011 │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ │
	│ addons  │ disable dashboard -p addons-069011 │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ │
	│ start   │ -p addons-069011 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:55 UTC │
	│ addons  │ addons-069011 addons disable volcano --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:55 UTC │ 16 Sep 25 23:55 UTC │
	│ addons  │ addons-069011 addons disable gcp-auth --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ enable headlamp -p addons-069011 --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ addons-069011 addons disable metrics-server --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-069011 │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ addons-069011 addons disable registry-creds --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:56 UTC │
	│ addons  │ addons-069011 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:56 UTC │ 16 Sep 25 23:57 UTC │
	│ addons  │ addons-069011 addons disable headlamp --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 16 Sep 25 23:58 UTC │ 16 Sep 25 23:58 UTC │
	│ addons  │ addons-069011 addons disable yakd --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 17 Sep 25 00:00 UTC │ 17 Sep 25 00:00 UTC │
	│ addons  │ addons-069011 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ addons  │ addons-069011 addons disable registry --alsologtostderr -v=1 │ addons-069011 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:48:27
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:48:27.723751  522590 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:48:27.723864  522590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:48:27.723869  522590 out.go:374] Setting ErrFile to fd 2...
	I0916 23:48:27.723873  522590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:48:27.724066  522590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0916 23:48:27.724618  522590 out.go:368] Setting JSON to false
	I0916 23:48:27.725494  522590 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9051,"bootTime":1758057457,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:48:27.725585  522590 start.go:140] virtualization: kvm guest
	I0916 23:48:27.728073  522590 out.go:179] * [addons-069011] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:48:27.729850  522590 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:48:27.729868  522590 notify.go:220] Checking for updates...
	I0916 23:48:27.733822  522590 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:48:27.736141  522590 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0916 23:48:27.738039  522590 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0916 23:48:27.740423  522590 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:48:27.743368  522590 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:48:27.746574  522590 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:48:27.771724  522590 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:48:27.771874  522590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:48:27.829971  522590 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:48:27.818365984 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:48:27.830249  522590 docker.go:318] overlay module found
	I0916 23:48:27.832946  522590 out.go:179] * Using the docker driver based on user configuration
	I0916 23:48:27.834751  522590 start.go:304] selected driver: docker
	I0916 23:48:27.834826  522590 start.go:918] validating driver "docker" against <nil>
	I0916 23:48:27.834849  522590 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:48:27.835571  522590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:48:27.897913  522590 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:48:27.886229333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:48:27.898100  522590 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:48:27.898315  522590 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:48:27.900183  522590 out.go:179] * Using Docker driver with root privileges
	I0916 23:48:27.901481  522590 cni.go:84] Creating CNI manager for ""
	I0916 23:48:27.901597  522590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 23:48:27.901613  522590 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:48:27.901710  522590 start.go:348] cluster config:
	{Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

                                                
                                                
	I0916 23:48:27.903324  522590 out.go:179] * Starting "addons-069011" primary control-plane node in "addons-069011" cluster
	I0916 23:48:27.904623  522590 cache.go:123] Beginning downloading kic base image for docker with crio
	I0916 23:48:27.905841  522590 out.go:179] * Pulling base image v0.0.48 ...
	I0916 23:48:27.907270  522590 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:48:27.907330  522590 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:48:27.907328  522590 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0916 23:48:27.907354  522590 cache.go:58] Caching tarball of preloaded images
	I0916 23:48:27.907495  522590 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 23:48:27.907513  522590 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0916 23:48:27.907895  522590 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/config.json ...
	I0916 23:48:27.907924  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/config.json: {Name:mk15dc7feab5fd17bb004b2e5f6ac3bc55ac0d4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:27.925199  522590 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0916 23:48:27.925352  522590 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0916 23:48:27.925371  522590 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0916 23:48:27.925375  522590 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0916 23:48:27.925383  522590 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0916 23:48:27.925403  522590 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0916 23:48:40.932191  522590 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0916 23:48:40.932224  522590 cache.go:232] Successfully downloaded all kic artifacts
	I0916 23:48:40.932259  522590 start.go:360] acquireMachinesLock for addons-069011: {Name:mk9387b718f452cc25627a84d4c20b7f46084ff2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:48:40.932371  522590 start.go:364] duration metric: took 90.542µs to acquireMachinesLock for "addons-069011"
	I0916 23:48:40.932411  522590 start.go:93] Provisioning new machine with config: &{Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 23:48:40.932527  522590 start.go:125] createHost starting for "" (driver="docker")
	I0916 23:48:40.934531  522590 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0916 23:48:40.934774  522590 start.go:159] libmachine.API.Create for "addons-069011" (driver="docker")
	I0916 23:48:40.934810  522590 client.go:168] LocalClient.Create starting
	I0916 23:48:40.934920  522590 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0916 23:48:41.819608  522590 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0916 23:48:42.094971  522590 cli_runner.go:164] Run: docker network inspect addons-069011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 23:48:42.113173  522590 cli_runner.go:211] docker network inspect addons-069011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 23:48:42.113240  522590 network_create.go:284] running [docker network inspect addons-069011] to gather additional debugging logs...
	I0916 23:48:42.113258  522590 cli_runner.go:164] Run: docker network inspect addons-069011
	W0916 23:48:42.130815  522590 cli_runner.go:211] docker network inspect addons-069011 returned with exit code 1
	I0916 23:48:42.130846  522590 network_create.go:287] error running [docker network inspect addons-069011]: docker network inspect addons-069011: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-069011 not found
	I0916 23:48:42.130884  522590 network_create.go:289] output of [docker network inspect addons-069011]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-069011 not found
	
	** /stderr **
	I0916 23:48:42.130990  522590 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:48:42.149832  522590 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002180220}
	I0916 23:48:42.149931  522590 network_create.go:124] attempt to create docker network addons-069011 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 23:48:42.150036  522590 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-069011 addons-069011
	I0916 23:48:42.212157  522590 network_create.go:108] docker network addons-069011 192.168.49.0/24 created
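For readers reproducing this step outside minikube: the inspect/pick-subnet/create sequence above reduces to a single docker network create. A minimal Go sketch via os/exec, with the name, subnet, gateway, MTU, and labels hardcoded to the values this run chose (assumes the docker CLI on PATH):

```go
package main

import (
	"fmt"
	"os/exec"
)

// Recreates the "docker network create" call from the log above.
// All values are the ones minikube picked for this particular run.
func main() {
	args := []string{
		"network", "create",
		"--driver=bridge",
		"--subnet=192.168.49.0/24",
		"--gateway=192.168.49.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=addons-069011",
		"addons-069011",
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("create failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("network id: %s", out)
}
```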
	I0916 23:48:42.212194  522590 kic.go:121] calculated static IP "192.168.49.2" for the "addons-069011" container
	I0916 23:48:42.212312  522590 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 23:48:42.229867  522590 cli_runner.go:164] Run: docker volume create addons-069011 --label name.minikube.sigs.k8s.io=addons-069011 --label created_by.minikube.sigs.k8s.io=true
	I0916 23:48:42.252846  522590 oci.go:103] Successfully created a docker volume addons-069011
	I0916 23:48:42.252968  522590 cli_runner.go:164] Run: docker run --rm --name addons-069011-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069011 --entrypoint /usr/bin/test -v addons-069011:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0916 23:48:45.649491  522590 cli_runner.go:217] Completed: docker run --rm --name addons-069011-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069011 --entrypoint /usr/bin/test -v addons-069011:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (3.39647838s)
	I0916 23:48:45.649523  522590 oci.go:107] Successfully prepared a docker volume addons-069011
	I0916 23:48:45.649558  522590 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:48:45.649589  522590 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 23:48:45.649695  522590 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-069011:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 23:48:49.956300  522590 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-069011:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.306552681s)
	I0916 23:48:49.956343  522590 kic.go:203] duration metric: took 4.306749088s to extract preloaded images to volume ...
	W0916 23:48:49.956477  522590 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0916 23:48:49.956523  522590 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0916 23:48:49.956572  522590 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 23:48:50.013382  522590 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-069011 --name addons-069011 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-069011 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-069011 --network addons-069011 --ip 192.168.49.2 --volume addons-069011:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0916 23:48:50.304600  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Running}}
	I0916 23:48:50.323420  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:48:50.342386  522590 cli_runner.go:164] Run: docker exec addons-069011 stat /var/lib/dpkg/alternatives/iptables
	I0916 23:48:50.402276  522590 oci.go:144] the created container "addons-069011" has a running status.
	I0916 23:48:50.402326  522590 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa...
	I0916 23:48:50.521235  522590 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 23:48:50.553384  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:48:50.579068  522590 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 23:48:50.579099  522590 kic_runner.go:114] Args: [docker exec --privileged addons-069011 chown docker:docker /home/docker/.ssh/authorized_keys]
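The SSH key minted above is a standard RSA keypair written in OpenSSH-compatible formats. A minimal sketch with crypto/rsa and golang.org/x/crypto/ssh (key size and file names are assumptions here; minikube's own implementation may differ):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// Generates an RSA keypair and writes id_rsa / id_rsa.pub in the
// formats the provisioner expects: a PEM private key and an
// authorized_keys-style public key.
func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		panic(err)
	}
	fmt.Println("wrote id_rsa and id_rsa.pub")
}
```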
	I0916 23:48:50.638566  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:48:50.659803  522590 machine.go:93] provisionDockerMachine start ...
	I0916 23:48:50.660411  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:50.680019  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:50.680310  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:50.680332  522590 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 23:48:50.820950  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-069011
	
	I0916 23:48:50.820990  522590 ubuntu.go:182] provisioning hostname "addons-069011"
	I0916 23:48:50.821063  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:50.841195  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:50.841673  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:50.841710  522590 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-069011 && echo "addons-069011" | sudo tee /etc/hostname
	I0916 23:48:50.996855  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-069011
	
	I0916 23:48:50.996967  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.016407  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:51.016637  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:51.016655  522590 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-069011' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-069011/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-069011' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:48:51.154270  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:48:51.154311  522590 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0916 23:48:51.154380  522590 ubuntu.go:190] setting up certificates
	I0916 23:48:51.154420  522590 provision.go:84] configureAuth start
	I0916 23:48:51.154487  522590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069011
	I0916 23:48:51.173820  522590 provision.go:143] copyHostCerts
	I0916 23:48:51.173904  522590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0916 23:48:51.174069  522590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0916 23:48:51.174140  522590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0916 23:48:51.174195  522590 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.addons-069011 san=[127.0.0.1 192.168.49.2 addons-069011 localhost minikube]
	I0916 23:48:51.417777  522590 provision.go:177] copyRemoteCerts
	I0916 23:48:51.417839  522590 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:48:51.417897  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.435902  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:51.535686  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0916 23:48:51.563321  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:48:51.590971  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 23:48:51.617420  522590 provision.go:87] duration metric: took 462.978002ms to configureAuth
	I0916 23:48:51.617461  522590 ubuntu.go:206] setting minikube options for container-runtime
	I0916 23:48:51.617668  522590 config.go:182] Loaded profile config "addons-069011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:48:51.617795  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.638144  522590 main.go:141] libmachine: Using SSH client type: native
	I0916 23:48:51.638409  522590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0916 23:48:51.638436  522590 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 23:48:51.891077  522590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 23:48:51.891114  522590 machine.go:96] duration metric: took 1.230812219s to provisionDockerMachine
	I0916 23:48:51.891125  522590 client.go:171] duration metric: took 10.956309615s to LocalClient.Create
	I0916 23:48:51.891146  522590 start.go:167] duration metric: took 10.956377105s to libmachine.API.Create "addons-069011"
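All of the provisioning commands above travel over the container's forwarded SSH port (127.0.0.1:33133 in this run). A minimal sketch of that transport with golang.org/x/crypto/ssh, assuming the id_rsa generated earlier sits in the working directory; host-key checking is skipped as a test-only shortcut:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// Runs a single command on the minikube node over the forwarded
// SSH port, the same transport the provisioner uses above.
func main() {
	key, err := os.ReadFile("id_rsa") // path shortened for illustration
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33133", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	fmt.Printf("%s err=%v\n", out, err)
}
```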
	I0916 23:48:51.891155  522590 start.go:293] postStartSetup for "addons-069011" (driver="docker")
	I0916 23:48:51.891170  522590 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:48:51.891245  522590 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:48:51.891288  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:51.909900  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.010593  522590 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:48:52.014317  522590 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 23:48:52.014357  522590 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 23:48:52.014366  522590 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 23:48:52.014375  522590 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0916 23:48:52.014406  522590 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0916 23:48:52.014479  522590 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0916 23:48:52.014515  522590 start.go:296] duration metric: took 123.348567ms for postStartSetup
	I0916 23:48:52.014852  522590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069011
	I0916 23:48:52.034024  522590 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/config.json ...
	I0916 23:48:52.034357  522590 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 23:48:52.034430  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:52.053383  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.147697  522590 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 23:48:52.152300  522590 start.go:128] duration metric: took 11.219755748s to createHost
	I0916 23:48:52.152322  522590 start.go:83] releasing machines lock for "addons-069011", held for 11.219940729s
	I0916 23:48:52.152383  522590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-069011
	I0916 23:48:52.170897  522590 ssh_runner.go:195] Run: cat /version.json
	I0916 23:48:52.170959  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:52.170960  522590 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:48:52.171033  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:48:52.190054  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.190316  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:48:52.282770  522590 ssh_runner.go:195] Run: systemctl --version
	I0916 23:48:52.358127  522590 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 23:48:52.500662  522590 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 23:48:52.505640  522590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:48:52.530299  522590 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0916 23:48:52.530413  522590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:48:52.562277  522590 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 23:48:52.562302  522590 start.go:495] detecting cgroup driver to use...
	I0916 23:48:52.562333  522590 detect.go:190] detected "systemd" cgroup driver on host os
	I0916 23:48:52.562405  522590 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:48:52.578904  522590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:48:52.592493  522590 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:48:52.592567  522590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:48:52.607812  522590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:48:52.623718  522590 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:48:52.695401  522590 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:48:52.772869  522590 docker.go:234] disabling docker service ...
	I0916 23:48:52.772931  522590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:48:52.793499  522590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:48:52.806446  522590 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:48:52.880604  522590 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:48:52.994666  522590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:48:53.008181  522590 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:48:53.026581  522590 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0916 23:48:53.026648  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.040463  522590 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0916 23:48:53.040546  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.052415  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.063700  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.074445  522590 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:48:53.085081  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.097098  522590 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:48:53.114871  522590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
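The sed pipeline above boils down to two line rewrites in /etc/crio/crio.conf.d/02-crio.conf: pin the pause image and force the systemd cgroup manager. A hedged Go equivalent operating on a local copy of the file (assumes plain `key = value` lines, as in the file the kicbase image ships):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// Applies the same two rewrites the log performs with sed:
// point cri-o at the minikube pause image and force the
// systemd cgroup manager.
func main() {
	const path = "02-crio.conf" // local copy, for illustration only
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(path, data, 0644); err != nil {
		panic(err)
	}
	fmt.Println("rewrote", path)
}
```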
	I0916 23:48:53.125827  522590 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:48:53.135170  522590 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:48:53.145546  522590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:48:53.253634  522590 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 23:48:53.356442  522590 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 23:48:53.356540  522590 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 23:48:53.360459  522590 start.go:563] Will wait 60s for crictl version
	I0916 23:48:53.360526  522590 ssh_runner.go:195] Run: which crictl
	I0916 23:48:53.364103  522590 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:48:53.402094  522590 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0916 23:48:53.402233  522590 ssh_runner.go:195] Run: crio --version
	I0916 23:48:53.441123  522590 ssh_runner.go:195] Run: crio --version
	I0916 23:48:53.481919  522590 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0916 23:48:53.483462  522590 cli_runner.go:164] Run: docker network inspect addons-069011 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 23:48:53.502054  522590 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 23:48:53.506129  522590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
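The one-liner above rewrites /etc/hosts idempotently: drop any existing host.minikube.internal entry, then append the gateway mapping. A minimal Go sketch of the same filter-and-append pattern, operating on a local file named hosts (hypothetical name, for illustration only):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// Idempotently pins host.minikube.internal to the network gateway,
// mirroring the bash one-liner in the log above.
func main() {
	const entry = "192.168.49.1\thost.minikube.internal"
	data, _ := os.ReadFile("hosts") // missing file treated as empty
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	fmt.Println("hosts updated")
}
```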
	I0916 23:48:53.518646  522590 kubeadm.go:875] updating cluster {Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 23:48:53.518762  522590 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:48:53.518816  522590 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:48:53.590933  522590 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 23:48:53.590961  522590 crio.go:433] Images already preloaded, skipping extraction
	I0916 23:48:53.591020  522590 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:48:53.627023  522590 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 23:48:53.627057  522590 cache_images.go:85] Images are preloaded, skipping loading
	I0916 23:48:53.627066  522590 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0916 23:48:53.627155  522590 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-069011 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:48:53.627228  522590 ssh_runner.go:195] Run: crio config
	I0916 23:48:53.674869  522590 cni.go:84] Creating CNI manager for ""
	I0916 23:48:53.674893  522590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 23:48:53.674906  522590 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 23:48:53.674926  522590 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-069011 NodeName:addons-069011 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 23:48:53.675093  522590 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-069011"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 23:48:53.675157  522590 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:48:53.685496  522590 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:48:53.685568  522590 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 23:48:53.695890  522590 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0916 23:48:53.715420  522590 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:48:53.738183  522590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0916 23:48:53.758975  522590 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 23:48:53.763002  522590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:48:53.775153  522590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:48:53.837066  522590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:48:53.861100  522590 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011 for IP: 192.168.49.2
	I0916 23:48:53.861120  522590 certs.go:194] generating shared ca certs ...
	I0916 23:48:53.861145  522590 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:53.861308  522590 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0916 23:48:54.155814  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt ...
	I0916 23:48:54.155846  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt: {Name:mk009b1713fd08c38e8c6ac054b69276424ded29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.156071  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key ...
	I0916 23:48:54.156093  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key: {Name:mk39b68875de7851b17692da85e287f48166d2fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.156213  522590 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0916 23:48:54.291541  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt ...
	I0916 23:48:54.291579  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt: {Name:mk94baf5fb1a8134bb0c9a9f3d32b751fe0bf777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.291793  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key ...
	I0916 23:48:54.291817  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key: {Name:mk06b3e70f919971eec12f66023f6279f2a9059e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.291928  522590 certs.go:256] generating profile certs ...
	I0916 23:48:54.292014  522590 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.key
	I0916 23:48:54.292060  522590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt with IP's: []
	I0916 23:48:54.529110  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt ...
	I0916 23:48:54.529147  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: {Name:mk9156e00306316f93255eae42ecd81bb5d60b0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.529374  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.key ...
	I0916 23:48:54.529406  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.key: {Name:mk15bd78effcf8815d5571a84284c31db31b997e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.529525  522590 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd
	I0916 23:48:54.529556  522590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0916 23:48:54.601370  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd ...
	I0916 23:48:54.601415  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd: {Name:mkb42f86b810cddd05c27083cd910769800b1942 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.602548  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd ...
	I0916 23:48:54.602578  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd: {Name:mkf41ec91a0589b4d908c830ee946e4604a6886c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.603343  522590 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt.86e487dd -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt
	I0916 23:48:54.603493  522590 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key.86e487dd -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key
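The apiserver serving cert generated above carries four IP SANs (10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2). A short sketch with crypto/x509 that produces a certificate with the same SANs; it is self-signed here for brevity, whereas minikube signs with the minikubeCA generated earlier:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// Issues a serving certificate with the same IP SANs the log lists
// for the apiserver cert. Self-signed for brevity.
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
```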
	I0916 23:48:54.603577  522590 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key
	I0916 23:48:54.603602  522590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt with IP's: []
	I0916 23:48:54.685718  522590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt ...
	I0916 23:48:54.685751  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt: {Name:mk4c4f7fbd326f3d00c11caa86441b715a5844e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.686777  522590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key ...
	I0916 23:48:54.686809  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key: {Name:mkde64e1b9ef5bdc16ad6f2b11b391d65f689b86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:54.687062  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:48:54.687107  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0916 23:48:54.687130  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:48:54.687161  522590 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0916 23:48:54.687932  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:48:54.717259  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:48:54.744669  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:48:54.771438  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 23:48:54.799454  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 23:48:54.826220  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 23:48:54.853243  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:48:54.878912  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 23:48:54.905711  522590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:48:54.935757  522590 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 23:48:54.956698  522590 ssh_runner.go:195] Run: openssl version
	I0916 23:48:54.962817  522590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:48:54.976805  522590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:48:54.980979  522590 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:48:54.981051  522590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:48:54.988637  522590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
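The symlink name b5213941.0 above is the OpenSSL subject hash of the CA certificate plus a .0 suffix, which is how OpenSSL locates CAs in a hashed certificate directory. A sketch that reproduces the hash and prints the corresponding symlink command (assumes a local copy of the CA as minikubeCA.pem):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// `openssl x509 -hash -noout` prints the subject hash used as the
// /etc/ssl/certs symlink name (b5213941 in this run).
func main() {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}
```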
	I0916 23:48:55.000379  522590 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:48:55.004385  522590 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:48:55.004456  522590 kubeadm.go:392] StartCluster: {Name:addons-069011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-069011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:48:55.004547  522590 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 23:48:55.004599  522590 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 23:48:55.043443  522590 cri.go:89] found id: ""
	I0916 23:48:55.043525  522590 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 23:48:55.053975  522590 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 23:48:55.064119  522590 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 23:48:55.064186  522590 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 23:48:55.074381  522590 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 23:48:55.074421  522590 kubeadm.go:157] found existing configuration files:
	
	I0916 23:48:55.074469  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 23:48:55.084667  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 23:48:55.084749  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 23:48:55.095859  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 23:48:55.106006  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 23:48:55.106068  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 23:48:55.115485  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 23:48:55.124880  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 23:48:55.124952  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 23:48:55.134292  522590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 23:48:55.144662  522590 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 23:48:55.144725  522590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 23:48:55.154111  522590 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 23:48:55.211692  522590 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0916 23:48:55.271378  522590 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 23:49:04.949743  522590 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0916 23:49:04.949820  522590 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 23:49:04.949928  522590 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 23:49:04.950016  522590 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0916 23:49:04.950100  522590 kubeadm.go:310] OS: Linux
	I0916 23:49:04.950168  522590 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 23:49:04.950250  522590 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 23:49:04.950311  522590 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 23:49:04.950355  522590 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 23:49:04.950436  522590 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 23:49:04.950511  522590 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 23:49:04.950590  522590 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 23:49:04.950659  522590 kubeadm.go:310] CGROUPS_IO: enabled
	I0916 23:49:04.950779  522590 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 23:49:04.950896  522590 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 23:49:04.950988  522590 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 23:49:04.951039  522590 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 23:49:04.953148  522590 out.go:252]   - Generating certificates and keys ...
	I0916 23:49:04.953253  522590 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 23:49:04.953350  522590 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 23:49:04.953473  522590 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 23:49:04.953544  522590 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 23:49:04.953598  522590 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 23:49:04.953656  522590 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 23:49:04.953723  522590 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 23:49:04.953871  522590 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-069011 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:49:04.953944  522590 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 23:49:04.954104  522590 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-069011 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 23:49:04.954204  522590 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 23:49:04.954308  522590 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 23:49:04.954373  522590 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 23:49:04.954472  522590 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 23:49:04.954529  522590 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 23:49:04.954641  522590 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 23:49:04.954719  522590 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 23:49:04.954827  522590 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 23:49:04.954889  522590 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 23:49:04.954961  522590 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 23:49:04.955029  522590 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 23:49:04.956667  522590 out.go:252]   - Booting up control plane ...
	I0916 23:49:04.956807  522590 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 23:49:04.956925  522590 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 23:49:04.956985  522590 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 23:49:04.957219  522590 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 23:49:04.957368  522590 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0916 23:49:04.957516  522590 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0916 23:49:04.957633  522590 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 23:49:04.957703  522590 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 23:49:04.957908  522590 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 23:49:04.958044  522590 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 23:49:04.958151  522590 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.203651ms
	I0916 23:49:04.958278  522590 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0916 23:49:04.958374  522590 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0916 23:49:04.958531  522590 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0916 23:49:04.958637  522590 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0916 23:49:04.958758  522590 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.870805967s
	I0916 23:49:04.958876  522590 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.059203573s
	I0916 23:49:04.958980  522590 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.002212231s
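The three control-plane-check lines above poll component health endpoints over loopback (kube-apiserver /livez on :8443, kube-controller-manager /healthz on :10257, kube-scheduler /livez on :10259) until each returns 200. A minimal probe sketch of that pattern, assuming self-signed serving certificates (hence the skipped TLS verification); the function name and poll interval are illustrative, not minikube's actual implementation:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeUntilHealthy polls an HTTPS health endpoint until it returns 200 OK
// or the deadline passes. Control-plane components serve self-signed certs,
// so certificate verification is skipped here (sketch only).
func probeUntilHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	// Endpoints mirror the ones logged above; 4m0s matches the logged budget.
	for _, u := range []string{
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	} {
		fmt.Println(u, probeUntilHealthy(u, 4*time.Minute))
	}
}
```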
	I0916 23:49:04.959143  522590 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 23:49:04.959322  522590 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 23:49:04.959464  522590 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 23:49:04.959729  522590 kubeadm.go:310] [mark-control-plane] Marking the node addons-069011 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 23:49:04.959828  522590 kubeadm.go:310] [bootstrap-token] Using token: hth27u.vwd374r3m591cy8w
	I0916 23:49:04.961508  522590 out.go:252]   - Configuring RBAC rules ...
	I0916 23:49:04.961663  522590 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 23:49:04.961761  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 23:49:04.961918  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 23:49:04.962103  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 23:49:04.962249  522590 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 23:49:04.962324  522590 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 23:49:04.962449  522590 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 23:49:04.962510  522590 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 23:49:04.962584  522590 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 23:49:04.962595  522590 kubeadm.go:310] 
	I0916 23:49:04.962677  522590 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 23:49:04.962687  522590 kubeadm.go:310] 
	I0916 23:49:04.962800  522590 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 23:49:04.962816  522590 kubeadm.go:310] 
	I0916 23:49:04.962858  522590 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 23:49:04.962957  522590 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 23:49:04.963031  522590 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 23:49:04.963041  522590 kubeadm.go:310] 
	I0916 23:49:04.963139  522590 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 23:49:04.963150  522590 kubeadm.go:310] 
	I0916 23:49:04.963217  522590 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 23:49:04.963226  522590 kubeadm.go:310] 
	I0916 23:49:04.963305  522590 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 23:49:04.963432  522590 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 23:49:04.963527  522590 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 23:49:04.963541  522590 kubeadm.go:310] 
	I0916 23:49:04.963668  522590 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 23:49:04.963778  522590 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 23:49:04.963792  522590 kubeadm.go:310] 
	I0916 23:49:04.963908  522590 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hth27u.vwd374r3m591cy8w \
	I0916 23:49:04.964060  522590 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 \
	I0916 23:49:04.964108  522590 kubeadm.go:310] 	--control-plane 
	I0916 23:49:04.964118  522590 kubeadm.go:310] 
	I0916 23:49:04.964224  522590 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 23:49:04.964234  522590 kubeadm.go:310] 
	I0916 23:49:04.964354  522590 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hth27u.vwd374r3m591cy8w \
	I0916 23:49:04.964531  522590 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 
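The `--discovery-token-ca-cert-hash` in the join commands above pins the cluster CA for joining nodes: it is `sha256:` plus the SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info. A sketch that recomputes it from the ca.crt under the certificate directory logged in the [certs] phase:

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash returns "sha256:" plus the hex SHA-256 of the CA certificate's
// DER-encoded Subject Public Key Info -- the value kubeadm prints as
// --discovery-token-ca-cert-hash.
func caCertHash(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
}

func main() {
	// certificateDir from the [certs] phase above.
	fmt.Println(caCertHash("/var/lib/minikube/certs/ca.crt"))
}
```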
	I0916 23:49:04.964546  522590 cni.go:84] Creating CNI manager for ""
	I0916 23:49:04.964565  522590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 23:49:04.966440  522590 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0916 23:49:04.968135  522590 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 23:49:04.972876  522590 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0916 23:49:04.972901  522590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 23:49:04.992864  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 23:49:05.238639  522590 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 23:49:05.238825  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:05.238851  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-069011 minikube.k8s.io/updated_at=2025_09_16T23_49_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=addons-069011 minikube.k8s.io/primary=true
	I0916 23:49:05.248222  522590 ops.go:34] apiserver oom_adj: -16
	I0916 23:49:05.324340  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:05.825316  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:06.324537  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:06.824724  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:07.325050  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:07.824729  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:08.325083  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:08.824525  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:09.324551  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:09.825331  522590 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:49:09.895926  522590 kubeadm.go:1105] duration metric: took 4.65716259s to wait for elevateKubeSystemPrivileges
	I0916 23:49:09.895964  522590 kubeadm.go:394] duration metric: took 14.891511977s to StartCluster
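The burst of identical `kubectl get sa default` invocations above is a readiness poll: kube-controller-manager creates the "default" service account asynchronously, and minikube retries until it exists before binding kube-system privileges (the "elevateKubeSystemPrivileges" step that took ~4.7s). A minimal sketch of that poll, assuming kubectl is on PATH; the helper name is hypothetical:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` every 500ms until the
// default service account exists; the controller-manager creates it
// asynchronously, which is why the log above repeats the same command.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig,
			"get", "sa", "default", "-n", "default")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	fmt.Println(waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute))
}
```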
	I0916 23:49:09.895989  522590 settings.go:142] acquiring lock: {Name:mk3b4e5824fb8718eece00dc70a9d05f0af2a028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:49:09.896108  522590 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0916 23:49:09.896612  522590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:49:09.896807  522590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 23:49:09.896820  522590 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 23:49:09.896883  522590 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 23:49:09.897046  522590 config.go:182] Loaded profile config "addons-069011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:49:09.897061  522590 addons.go:69] Setting volcano=true in profile "addons-069011"
	I0916 23:49:09.897068  522590 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-069011"
	I0916 23:49:09.897082  522590 addons.go:238] Setting addon volcano=true in "addons-069011"
	I0916 23:49:09.897052  522590 addons.go:69] Setting yakd=true in profile "addons-069011"
	I0916 23:49:09.897090  522590 addons.go:69] Setting registry-creds=true in profile "addons-069011"
	I0916 23:49:09.897102  522590 addons.go:238] Setting addon yakd=true in "addons-069011"
	I0916 23:49:09.897112  522590 addons.go:238] Setting addon registry-creds=true in "addons-069011"
	I0916 23:49:09.897122  522590 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-069011"
	I0916 23:49:09.897128  522590 addons.go:69] Setting storage-provisioner=true in profile "addons-069011"
	I0916 23:49:09.897146  522590 addons.go:69] Setting volumesnapshots=true in profile "addons-069011"
	I0916 23:49:09.897161  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897169  522590 addons.go:69] Setting metrics-server=true in profile "addons-069011"
	I0916 23:49:09.897176  522590 addons.go:69] Setting cloud-spanner=true in profile "addons-069011"
	I0916 23:49:09.897178  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897047  522590 addons.go:69] Setting inspektor-gadget=true in profile "addons-069011"
	I0916 23:49:09.897165  522590 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-069011"
	I0916 23:49:09.897206  522590 addons.go:238] Setting addon cloud-spanner=true in "addons-069011"
	I0916 23:49:09.897216  522590 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-069011"
	I0916 23:49:09.897232  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897233  522590 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-069011"
	I0916 23:49:09.897264  522590 addons.go:238] Setting addon inspektor-gadget=true in "addons-069011"
	I0916 23:49:09.897181  522590 addons.go:238] Setting addon metrics-server=true in "addons-069011"
	I0916 23:49:09.897423  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897445  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897164  522590 addons.go:238] Setting addon volumesnapshots=true in "addons-069011"
	I0916 23:49:09.897586  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897092  522590 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-069011"
	I0916 23:49:09.897619  522590 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-069011"
	I0916 23:49:09.897820  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897823  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897828  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897883  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897925  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897931  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.898010  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897153  522590 addons.go:238] Setting addon storage-provisioner=true in "addons-069011"
	I0916 23:49:09.898348  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897270  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897123  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.898989  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.899031  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.897162  522590 addons.go:69] Setting registry=true in profile "addons-069011"
	I0916 23:49:09.899114  522590 addons.go:238] Setting addon registry=true in "addons-069011"
	I0916 23:49:09.899147  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897135  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897171  522590 addons.go:69] Setting default-storageclass=true in profile "addons-069011"
	I0916 23:49:09.899508  522590 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-069011"
	I0916 23:49:09.897278  522590 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-069011"
	I0916 23:49:09.899697  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897286  522590 addons.go:69] Setting ingress=true in profile "addons-069011"
	I0916 23:49:09.899882  522590 addons.go:238] Setting addon ingress=true in "addons-069011"
	I0916 23:49:09.899918  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.897295  522590 addons.go:69] Setting gcp-auth=true in profile "addons-069011"
	I0916 23:49:09.899976  522590 mustload.go:65] Loading cluster: addons-069011
	I0916 23:49:09.897305  522590 addons.go:69] Setting ingress-dns=true in profile "addons-069011"
	I0916 23:49:09.900142  522590 addons.go:238] Setting addon ingress-dns=true in "addons-069011"
	I0916 23:49:09.900176  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.900346  522590 out.go:179] * Verifying Kubernetes components...
	I0916 23:49:09.902141  522590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:49:09.906029  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906489  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906586  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906921  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.907068  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.909270  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.909876  522590 config.go:182] Loaded profile config "addons-069011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:49:09.910613  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.906032  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:09.966036  522590 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-069011"
	I0916 23:49:09.966110  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:09.966784  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	W0916 23:49:09.981981  522590 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0916 23:49:09.986930  522590 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0916 23:49:09.989771  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 23:49:09.989801  522590 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 23:49:09.989878  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:09.990151  522590 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0916 23:49:09.991871  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 23:49:09.992484  522590 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0916 23:49:09.993934  522590 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 23:49:09.993954  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 23:49:09.994025  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:09.994418  522590 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0916 23:49:09.994431  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0916 23:49:09.994485  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:09.997452  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 23:49:09.997452  522590 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0916 23:49:10.001152  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 23:49:10.001192  522590 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0916 23:49:10.001229  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0916 23:49:10.001311  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.003359  522590 addons.go:238] Setting addon default-storageclass=true in "addons-069011"
	I0916 23:49:10.003429  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:10.003879  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:10.004609  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 23:49:10.006166  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 23:49:10.007322  522590 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0916 23:49:10.008643  522590 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 23:49:10.008663  522590 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0916 23:49:10.008684  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 23:49:10.008820  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 23:49:10.008829  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.010190  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 23:49:10.010220  522590 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 23:49:10.010294  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.012486  522590 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 23:49:10.012564  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 23:49:10.014826  522590 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:49:10.014910  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 23:49:10.015167  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.016771  522590 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0916 23:49:10.018372  522590 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 23:49:10.018418  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0916 23:49:10.018493  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 23:49:10.018494  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.019739  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 23:49:10.019764  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 23:49:10.019840  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.023104  522590 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0916 23:49:10.023240  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0916 23:49:10.024340  522590 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 23:49:10.024365  522590 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0916 23:49:10.024441  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.025784  522590 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0916 23:49:10.025900  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:49:10.027422  522590 out.go:179]   - Using image docker.io/registry:3.0.0
	I0916 23:49:10.029503  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:10.032360  522590 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 23:49:10.032382  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 23:49:10.032451  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.032643  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:49:10.037094  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.038113  522590 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 23:49:10.038152  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 23:49:10.038221  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.058927  522590 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 23:49:10.058950  522590 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 23:49:10.059009  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.063705  522590 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 23:49:10.066747  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 23:49:10.066781  522590 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 23:49:10.066937  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.067231  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.069660  522590 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 23:49:10.072852  522590 out.go:179]   - Using image docker.io/busybox:stable
	I0916 23:49:10.077706  522590 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 23:49:10.077738  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 23:49:10.077812  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:10.081171  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.099594  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.099601  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.101679  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.103303  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.109277  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.113014  522590 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 23:49:10.114406  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.114692  522590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:49:10.116962  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.132677  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.135654  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.137795  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.144377  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:10.149192  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
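The repeated `docker container inspect -f ...` calls interleaved above resolve the host port Docker mapped to the node container's 22/tcp so each SSH client can dial in (port 33133 in this run). A sketch using the same Go template verbatim from the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the host port Docker mapped to the container's SSH
// port, using the same inspect template as the cli_runner calls above.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	fmt.Println(sshHostPort("addons-069011"))
}
```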
	I0916 23:49:10.245816  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 23:49:10.245838  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 23:49:10.253803  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0916 23:49:10.256108  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 23:49:10.265944  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 23:49:10.288794  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 23:49:10.288827  522590 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 23:49:10.291276  522590 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:10.291301  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0916 23:49:10.298027  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:49:10.301761  522590 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 23:49:10.301815  522590 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 23:49:10.303881  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 23:49:10.303906  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 23:49:10.307619  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0916 23:49:10.321011  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 23:49:10.321513  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 23:49:10.321533  522590 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 23:49:10.335228  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 23:49:10.342628  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 23:49:10.353105  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 23:49:10.360830  522590 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 23:49:10.360864  522590 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 23:49:10.366097  522590 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 23:49:10.366124  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 23:49:10.368966  522590 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 23:49:10.368997  522590 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 23:49:10.374870  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 23:49:10.374897  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 23:49:10.383228  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:10.419473  522590 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 23:49:10.419505  522590 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 23:49:10.420148  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 23:49:10.420173  522590 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 23:49:10.431466  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 23:49:10.431495  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 23:49:10.431508  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 23:49:10.447520  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 23:49:10.491601  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 23:49:10.491635  522590 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 23:49:10.495666  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 23:49:10.495699  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 23:49:10.522266  522590 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 23:49:10.522304  522590 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 23:49:10.608119  522590 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
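The long sed pipeline at 23:49:10.113 is what produced this "host record injected" line: it rewrites the coredns ConfigMap so `host.minikube.internal` resolves to the host gateway (192.168.49.1), inserting a `hosts` block ahead of the `forward . /etc/resolv.conf` directive. The same transformation sketched in Go, with an illustrative Corefile rather than the exact one from this cluster:

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS `hosts` block mapping
// host.minikube.internal to hostIP immediately before the `forward`
// directive, mirroring the sed pipeline logged above.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	// Illustrative Corefile fragment (assumption, not from this run).
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
}
```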
	I0916 23:49:10.610081  522590 node_ready.go:35] waiting up to 6m0s for node "addons-069011" to be "Ready" ...
	I0916 23:49:10.613978  522590 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 23:49:10.614095  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 23:49:10.619888  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 23:49:10.619918  522590 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 23:49:10.636272  522590 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 23:49:10.636303  522590 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 23:49:10.689230  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 23:49:10.705272  522590 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 23:49:10.705297  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 23:49:10.708368  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 23:49:10.708557  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 23:49:10.788275  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 23:49:10.788306  522590 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 23:49:10.806501  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 23:49:10.869607  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 23:49:10.869632  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 23:49:10.937889  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 23:49:10.937914  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 23:49:11.002071  522590 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 23:49:11.002102  522590 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 23:49:11.047895  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 23:49:11.130142  522590 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-069011" context rescaled to 1 replicas
	I0916 23:49:11.643350  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.290178117s)
	I0916 23:49:11.643439  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.30078278s)
	I0916 23:49:11.643452  522590 addons.go:479] Verifying addon ingress=true in "addons-069011"
	I0916 23:49:11.643582  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.212051777s)
	I0916 23:49:11.643613  522590 addons.go:479] Verifying addon registry=true in "addons-069011"
	I0916 23:49:11.643522  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.260251451s)
	I0916 23:49:11.643722  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.196160875s)
	W0916 23:49:11.643735  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:11.643740  522590 addons.go:479] Verifying addon metrics-server=true in "addons-069011"
	I0916 23:49:11.643761  522590 retry.go:31] will retry after 298.602868ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
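Note that this apply can never succeed by retrying: kubectl rejects ig-crd.yaml because the manifest carries no `apiVersion` or `kind`, which is consistent with the earlier transfer line reporting only 14 bytes copied for that file (an effectively empty CRD manifest). A pre-apply sanity check like the sketch below would fail fast instead of burning retries; the helper is hypothetical and uses crude `---` splitting rather than a YAML parser:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// validateManifest rejects manifests that cannot possibly apply: every
// non-empty YAML document must carry apiVersion and kind. The 14-byte
// ig-crd.yaml copied above would fail this check immediately.
func validateManifest(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	for i, doc := range strings.Split(string(data), "\n---") {
		if strings.TrimSpace(doc) == "" {
			continue // blank separator documents are fine
		}
		if !strings.Contains(doc, "apiVersion:") || !strings.Contains(doc, "kind:") {
			return fmt.Errorf("%s: document %d missing apiVersion or kind", path, i)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateManifest("/etc/kubernetes/addons/ig-crd.yaml"))
}
```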
	I0916 23:49:11.646501  522590 out.go:179] * Verifying registry addon...
	I0916 23:49:11.646501  522590 out.go:179] * Verifying ingress addon...
	I0916 23:49:11.646504  522590 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-069011 service yakd-dashboard -n yakd-dashboard
	
	I0916 23:49:11.652191  522590 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 23:49:11.652206  522590 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 23:49:11.655147  522590 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 23:49:11.655173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:11.655271  522590 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 23:49:11.655299  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:11.943533  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:12.143203  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.336408881s)
	W0916 23:49:12.143280  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 23:49:12.143297  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.095362374s)
	I0916 23:49:12.143318  522590 retry.go:31] will retry after 271.042655ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
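Unlike the ig-crd failure, this one is a CRD-ordering race rather than a broken manifest: the VolumeSnapshotClass CRD is created in the same apply, but the API server has not yet established it when the `csi-hostpath-snapclass` object arrives, so the kind is unknown to discovery. The `--force` retry at 23:49:12.415 appears to succeed once discovery catches up. A sketch of the usual fix, applying CRDs first and blocking on their Established condition before submitting the custom resources (assumes kubectl on PATH; the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithCRDWait applies CRD manifests, waits until the API server
// reports the CRD Established, then applies the custom resources. This
// avoids the "no matches for kind VolumeSnapshotClass ... ensure CRDs are
// installed first" race seen above.
func applyWithCRDWait(kubeconfig string, crdFiles, crFiles []string) error {
	run := func(args ...string) error {
		return exec.Command("kubectl",
			append([]string{"--kubeconfig", kubeconfig}, args...)...).Run()
	}
	for _, f := range crdFiles {
		if err := run("apply", "-f", f); err != nil {
			return fmt.Errorf("apply %s: %w", f, err)
		}
	}
	// kubectl wait blocks until the CRD's Established condition is True.
	if err := run("wait", "--for=condition=established",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io",
		fmt.Sprintf("--timeout=%s", time.Minute)); err != nil {
		return err
	}
	for _, f := range crFiles {
		if err := run("apply", "-f", f); err != nil {
			return fmt.Errorf("apply %s: %w", f, err)
		}
	}
	return nil
}

func main() {
	fmt.Println(applyWithCRDWait("/var/lib/minikube/kubeconfig",
		[]string{"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"},
		[]string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"}))
}
```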
	I0916 23:49:12.143322  522590 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-069011"
	I0916 23:49:12.145833  522590 out.go:179] * Verifying csi-hostpath-driver addon...
	I0916 23:49:12.148236  522590 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 23:49:12.153014  522590 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 23:49:12.153041  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:12.157053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:12.157321  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:12.415287  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0916 23:49:12.575627  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:12.575662  522590 retry.go:31] will retry after 298.950278ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:12.614105  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:12.652906  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:12.655120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:12.655721  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:12.875699  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:13.152262  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:13.155946  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:13.156155  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:13.653200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:13.655268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:13.655558  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:14.152741  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:14.154674  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:14.154869  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:14.651414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:14.654802  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:14.654981  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:14.929904  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.51454475s)
	I0916 23:49:14.929925  522590 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.05417803s)
	W0916 23:49:14.929968  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:14.929993  522590 retry.go:31] will retry after 724.402782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:15.113335  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
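
The node_ready.go warnings come from a parallel poll of the node object itself: the wait continues while the node's NodeReady condition reports False. A small client-go sketch of that check (an illustration, not minikube's node_ready.go; shown as a library package):

```go
package ready

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady reports whether the node's NodeReady condition is True --
// the same check behind the `"Ready":"False" status (will retry)` lines above.
func nodeIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			fmt.Printf("node %q has %q:%q\n", name, c.Type, c.Status)
			return c.Status == v1.ConditionTrue, nil
		}
	}
	return false, nil
}
```
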
	I0916 23:49:15.152058  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:15.155353  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:15.155409  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:15.651139  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:15.655103  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:15.655174  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:15.655439  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:16.152053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:16.155268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:16.155481  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:16.233482  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:16.233517  522590 retry.go:31] will retry after 528.645422ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:16.652337  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:16.654976  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:16.655052  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:16.763126  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:17.152861  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:17.155200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:17.155374  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:17.346237  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:17.346292  522590 retry.go:31] will retry after 1.241721728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
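
The retry.go lines show how minikube reacts to the failed apply: it re-runs the command after randomized, roughly increasing delays (299ms, 724ms, 529ms, 1.24s so far). A minimal sketch of such jittered backoff, assuming a delay window that doubles per attempt (an illustration of the pattern, not minikube's retry.go):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn until it succeeds or attempts run out. Each delay
// is drawn from a window that doubles per attempt, matching the log's pattern
// of randomized, roughly increasing waits.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		window := base << uint(i) // 300ms, 600ms, 1.2s, ...
		delay := window/2 + time.Duration(rand.Int63n(int64(window)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	_ = retryWithBackoff(5, 300*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("apply failed (attempt %d)", calls)
		}
		return nil
	})
}
```

Backoff of this kind only helps with transient failures; since the validation error here is deterministic, every attempt in this log fails identically.
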
	W0916 23:49:17.613291  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:17.637138  522590 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 23:49:17.637240  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:17.651912  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:17.655594  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:17.655874  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:17.659459  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
	I0916 23:49:17.770859  522590 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 23:49:17.790444  522590 addons.go:238] Setting addon gcp-auth=true in "addons-069011"
	I0916 23:49:17.790517  522590 host.go:66] Checking if "addons-069011" exists ...
	I0916 23:49:17.790880  522590 cli_runner.go:164] Run: docker container inspect addons-069011 --format={{.State.Status}}
	I0916 23:49:17.810255  522590 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 23:49:17.810334  522590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-069011
	I0916 23:49:17.829504  522590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa Username:docker}
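
The cli_runner/sshutil pairs above first resolve the host port Docker mapped to the container's 22/tcp, then open an SSH session with the machine key. A sketch of those two steps, assuming the docker CLI is on PATH and reusing the key path from the log (illustration only; the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"

	"golang.org/x/crypto/ssh"
)

// hostSSHPort asks Docker which host port is mapped to the container's
// 22/tcp, mirroring the `docker container inspect -f ...HostPort` call above.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("addons-069011")
	if err != nil {
		panic(err)
	}
	// Key path taken from the sshutil line in the log.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21550-517646/.minikube/machines/addons-069011/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:"+port, &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node only
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected to", client.RemoteAddr())
}
```
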
	I0916 23:49:17.924366  522590 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:49:17.925772  522590 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0916 23:49:17.926989  522590 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 23:49:17.927016  522590 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 23:49:17.947928  522590 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 23:49:17.947963  522590 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 23:49:17.968887  522590 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 23:49:17.968910  522590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 23:49:17.988471  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 23:49:18.151889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:18.155501  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:18.155799  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:18.360333  522590 addons.go:479] Verifying addon gcp-auth=true in "addons-069011"
	I0916 23:49:18.361695  522590 out.go:179] * Verifying gcp-auth addon...
	I0916 23:49:18.364169  522590 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 23:49:18.367024  522590 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 23:49:18.367044  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
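
Each kapi.go line above is one tick of a poll loop: list the pods matching the addon's label selector and report their state until they leave Pending. A simplified sketch of that loop with client-go, keyed to the pod phase rather than minikube's richer state string (an illustration, not kapi.go itself):

```go
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodRunning polls pods matching a label selector until one is
// Running, mirroring the "waiting for pod" loop in the log.
func waitForPodRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			if p.Status.Phase == v1.PodRunning {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the ~500ms cadence seen above
	}
	return fmt.Errorf("pod %q did not become Running within %v", selector, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForPodRunning(cs, "gcp-auth",
		"kubernetes.io/minikube-addons=gcp-auth", 5*time.Minute); err != nil {
		panic(err)
	}
}
```

In this run the pods never leave Pending because the node itself never reports Ready, so these loops repeat until the test's 6m0s deadline expires.
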
	I0916 23:49:18.588324  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:18.652355  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:18.654775  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:18.655329  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:18.867741  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:19.151755  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:19.154903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:19.154930  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:19.161345  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:19.161383  522590 retry.go:31] will retry after 2.165570319s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:19.367774  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:19.614026  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:19.652152  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:19.655765  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:19.655827  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:19.867758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:20.151387  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:20.154666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:20.154897  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:20.368600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:20.651411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:20.655000  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:20.655011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:20.868027  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:21.151730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:21.155244  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:21.155464  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:21.327698  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:21.367411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:21.650905  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:21.655659  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:21.655769  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:21.867968  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:21.902069  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:21.902100  522590 retry.go:31] will retry after 1.920767743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:22.113269  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:22.152312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:22.154840  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:22.154952  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:22.368638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:22.651563  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:22.654897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:22.655020  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:22.868412  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:23.151599  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:23.155033  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:23.155245  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:23.367616  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:23.651422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:23.654714  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:23.654854  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:23.823078  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:23.867734  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:24.113772  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:24.152012  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:24.155306  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:24.155536  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:24.367843  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:24.396574  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:24.396608  522590 retry.go:31] will retry after 5.249600328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:24.651892  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:24.655386  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:24.655528  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:24.868048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:25.152228  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:25.154971  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:25.155056  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:25.368598  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:25.651661  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:25.655231  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:25.655269  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:25.867507  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:26.151287  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:26.155745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:26.155923  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:26.368083  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:26.612894  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:26.652086  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:26.655386  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:26.655500  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:26.867894  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:27.151727  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:27.155040  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:27.155077  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:27.368077  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:27.652080  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:27.655544  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:27.655685  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:27.868071  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:28.151972  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:28.155039  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:28.155194  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:28.367271  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:28.613247  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:28.652605  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:28.654553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:28.654734  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:28.868444  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:29.151120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:29.155325  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:29.155404  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:29.367903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:29.646635  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:29.651947  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:29.655369  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:29.655591  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:29.868090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:30.151994  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:30.155445  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:30.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:30.222879  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:30.222909  522590 retry.go:31] will retry after 6.679975361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:30.368039  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:30.651921  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:30.655141  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:30.655354  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:30.867036  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:31.112894  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:31.151818  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:31.155258  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:31.155291  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:31.367578  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:31.651196  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:31.655723  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:31.655764  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:31.867818  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:32.152173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:32.155965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:32.156115  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:32.367078  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:32.652733  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:32.655287  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:32.655347  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:32.867604  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:33.113866  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:33.151850  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:33.155462  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:33.155490  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:33.367548  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:33.651173  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:33.655487  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:33.655550  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:33.867796  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:34.151692  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:34.154752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:34.154822  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:34.367980  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:34.652127  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:34.655730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:34.655791  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:34.868271  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:35.151839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:35.155765  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:35.155925  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:35.368376  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:35.613366  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:35.651791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:35.655929  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:35.656002  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:35.868276  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:36.152007  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:36.155246  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:36.155379  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:36.367593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:36.652140  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:36.655627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:36.655826  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:36.867579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:36.903759  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:37.152322  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:37.155245  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:37.155410  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:37.367621  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:37.484516  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:37.484552  522590 retry.go:31] will retry after 4.853736845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:49:37.613755  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:37.651588  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:37.654987  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:37.655126  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:37.867377  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:38.151407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:38.154847  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:38.155074  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:38.368215  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:38.651724  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:38.655025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:38.655174  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:38.867641  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:39.151291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:39.155533  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:39.155660  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:39.368023  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:39.613957  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:39.652056  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:39.655324  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:39.655427  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:39.867688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:40.151889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:40.155213  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:40.155515  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:40.367629  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:40.652268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:40.655504  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:40.655716  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:40.867786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:41.151908  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:41.155026  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:41.155219  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:41.367009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:41.652274  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:41.654845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:41.654993  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:41.868497  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:42.113784  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:42.152011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:42.156178  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:42.156253  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:42.339312  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:42.368085  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:42.653863  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:42.656534  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:42.656609  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:42.867016  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:42.931965  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:42.932013  522590 retry.go:31] will retry after 9.201032876s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:43.151738  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:43.155452  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:43.157165  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:43.367931  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:43.651921  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:43.655792  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:43.655791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:43.868283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:44.151192  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:44.155952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:44.156077  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:44.368187  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:44.612897  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:44.651871  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:44.655165  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:44.655374  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:44.867416  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:45.152200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:45.155365  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:45.155527  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:45.367088  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:45.652905  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:45.655224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:45.655382  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:45.867470  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:46.152562  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:46.155553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:46.155698  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:46.367899  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:46.613967  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:46.652183  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:46.655613  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:46.655685  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:46.867721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:47.151749  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:47.155062  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:47.155242  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:47.367292  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:47.652156  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:47.655812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:47.656147  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:47.867423  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:48.152152  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:48.155526  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:48.155678  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:48.367871  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:48.651966  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:48.655104  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:48.655456  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:48.867380  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:49:49.113864  522590 node_ready.go:57] node "addons-069011" has "Ready":"False" status (will retry)
	I0916 23:49:49.151422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:49.154601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:49.154659  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:49.368059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:49.651895  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:49.655081  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:49.655227  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:49.867193  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:50.151407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:50.154433  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:50.154532  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:50.367752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:50.614048  522590 node_ready.go:49] node "addons-069011" is "Ready"
	I0916 23:49:50.614124  522590 node_ready.go:38] duration metric: took 40.004018622s for node "addons-069011" to be "Ready" ...
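Roughly 40 seconds to Ready is plausible for this flow: the kubelet reports NotReady until the network plugin (kindnet in this run) initializes. The condition the poller is watching can be read directly; addons-069011 is the node name from the log:

    kubectl get node addons-069011 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints False until the network plugin is up, then True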
	I0916 23:49:50.614142  522590 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:49:50.614260  522590 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:49:50.634002  522590 api_server.go:72] duration metric: took 40.737149121s to wait for apiserver process to appear ...
	I0916 23:49:50.634037  522590 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:49:50.634066  522590 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 23:49:50.639530  522590 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 23:49:50.640709  522590 api_server.go:141] control plane version: v1.34.0
	I0916 23:49:50.640743  522590 api_server.go:131] duration metric: took 6.69752ms to wait for apiserver health ...
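The healthz probe can be reproduced by hand. minikube hits the endpoint without credentials, so the one-liner below assumes anonymous access to /healthz is allowed (as it evidently is here; hardened clusters may refuse it), and -k skips verification against the cluster CA:

    curl -k https://192.168.49.2:8443/healthz
    # a healthy apiserver answers HTTP 200 with the body "ok"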
	I0916 23:49:50.640754  522590 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:49:50.645035  522590 system_pods.go:59] 20 kube-system pods found
	I0916 23:49:50.645109  522590 system_pods.go:61] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:50.645119  522590 system_pods.go:61] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:50.645126  522590 system_pods.go:61] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending
	I0916 23:49:50.645131  522590 system_pods.go:61] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending
	I0916 23:49:50.645134  522590 system_pods.go:61] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending
	I0916 23:49:50.645138  522590 system_pods.go:61] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:50.645141  522590 system_pods.go:61] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:50.645146  522590 system_pods.go:61] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:50.645150  522590 system_pods.go:61] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:50.645156  522590 system_pods.go:61] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:50.645165  522590 system_pods.go:61] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:50.645171  522590 system_pods.go:61] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:50.645182  522590 system_pods.go:61] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:50.645192  522590 system_pods.go:61] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:50.645206  522590 system_pods.go:61] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:50.645211  522590 system_pods.go:61] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending
	I0916 23:49:50.645217  522590 system_pods.go:61] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:50.645222  522590 system_pods.go:61] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.645231  522590 system_pods.go:61] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.645238  522590 system_pods.go:61] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:50.645253  522590 system_pods.go:74] duration metric: took 4.491675ms to wait for pod list to return data ...
	I0916 23:49:50.645267  522590 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:49:50.649832  522590 default_sa.go:45] found service account: "default"
	I0916 23:49:50.649863  522590 default_sa.go:55] duration metric: took 4.587184ms for default service account to be created ...
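The default ServiceAccount is created asynchronously by the service-account controller inside kube-controller-manager shortly after a namespace exists, which is why it is polled for rather than assumed. The manual equivalent of this check:

    kubectl -n default get serviceaccount default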
	I0916 23:49:50.649876  522590 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:49:50.651240  522590 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 23:49:50.651263  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:50.653416  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:50.653453  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:50.653463  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:50.653471  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending
	I0916 23:49:50.653478  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending
	I0916 23:49:50.653507  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending
	I0916 23:49:50.653517  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:50.653523  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:50.653531  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:50.653541  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:50.653553  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:50.653564  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:50.653570  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:50.653577  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:50.653586  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:50.653604  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:50.653610  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending
	I0916 23:49:50.653621  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:50.653630  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.653641  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.653649  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:50.653671  522590 retry.go:31] will retry after 286.454663ms: missing components: kube-dns
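"missing components: kube-dns" means the DNS pod exists but is not yet Running. CoreDNS pods still carry the legacy k8s-app=kube-dns label, so the state being waited on here can be inspected with:

    kubectl -n kube-system get pods -l k8s-app=kube-dns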
	I0916 23:49:50.654669  522590 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 23:49:50.654689  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:50.655263  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:50.867812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:50.970963  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:50.971008  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:50.971021  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:50.971032  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 23:49:50.971040  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 23:49:50.971049  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 23:49:50.971060  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:50.971067  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:50.971075  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:50.971081  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:50.971093  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:50.971098  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:50.971107  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:50.971115  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:50.971127  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:50.971139  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:50.971149  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0916 23:49:50.971487  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:50.971519  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.971529  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:50.971537  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:50.971560  522590 retry.go:31] will retry after 250.710433ms: missing components: kube-dns
	I0916 23:49:51.152661  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:51.154830  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:51.154922  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:51.227146  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:51.227184  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:51.227191  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:49:51.227200  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 23:49:51.227206  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 23:49:51.227213  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 23:49:51.227219  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:51.227223  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:51.227226  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:51.227230  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:51.227235  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:51.227241  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:51.227244  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:51.227250  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:51.227256  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:51.227261  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:51.227265  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0916 23:49:51.227272  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:51.227277  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.227286  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.227292  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:49:51.227310  522590 retry.go:31] will retry after 293.334556ms: missing components: kube-dns
	I0916 23:49:51.368304  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:51.526481  522590 system_pods.go:86] 20 kube-system pods found
	I0916 23:49:51.526535  522590 system_pods.go:89] "amd-gpu-device-plugin-flfw9" [b2f08e52-5a20-4c80-bc6c-a073ebe5797b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:49:51.526545  522590 system_pods.go:89] "coredns-66bc5c9577-m872b" [71d1129f-0b38-4fd0-aa94-2216f817db05] Running
	I0916 23:49:51.526559  522590 system_pods.go:89] "csi-hostpath-attacher-0" [c59ae278-316e-42e6-883c-d1bf3dcac831] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 23:49:51.526572  522590 system_pods.go:89] "csi-hostpath-resizer-0" [b6811a1c-ec65-41d4-b637-3dba433103a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 23:49:51.526582  522590 system_pods.go:89] "csi-hostpathplugin-s98vb" [8fab673f-39bf-4b73-8168-0a4b14363105] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 23:49:51.526589  522590 system_pods.go:89] "etcd-addons-069011" [69ebe6a0-299e-49e5-8218-fdac355c5f45] Running
	I0916 23:49:51.526595  522590 system_pods.go:89] "kindnet-hn7tx" [cb5fada4-bc37-494a-be0d-b2fd7f39560e] Running
	I0916 23:49:51.526601  522590 system_pods.go:89] "kube-apiserver-addons-069011" [4b5f12ce-0594-4279-8153-21e81bc3f16c] Running
	I0916 23:49:51.526608  522590 system_pods.go:89] "kube-controller-manager-addons-069011" [fc179e5f-6cd8-4dfc-b1dc-69acfeef857b] Running
	I0916 23:49:51.526618  522590 system_pods.go:89] "kube-ingress-dns-minikube" [3ebf3aba-8898-42b1-a92e-3bc50dd56aab] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:49:51.526623  522590 system_pods.go:89] "kube-proxy-v85kq" [4f75720a-ff81-4686-9e02-38105efce58a] Running
	I0916 23:49:51.526629  522590 system_pods.go:89] "kube-scheduler-addons-069011" [28fecee5-eca9-4722-85d9-2b6ba07ad5c1] Running
	I0916 23:49:51.526635  522590 system_pods.go:89] "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:49:51.526645  522590 system_pods.go:89] "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:49:51.526690  522590 system_pods.go:89] "registry-66898fdd98-bl4r5" [34782a61-58ac-458e-ab2f-7a22bac44c65] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:49:51.526699  522590 system_pods.go:89] "registry-creds-764b6fb674-2s5b5" [5888781f-e41a-4936-b640-e0d9428b7522] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0916 23:49:51.526714  522590 system_pods.go:89] "registry-proxy-gtpv9" [65985cef-0aef-4a2d-8362-f2412f19f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:49:51.526722  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s7m82" [100900c8-3969-4728-9976-e2aa3a810064] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.526731  522590 system_pods.go:89] "snapshot-controller-7d9fbc56b8-st98r" [3bcc527a-ffe8-4b57-a90c-e0ab34894d2c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:49:51.526737  522590 system_pods.go:89] "storage-provisioner" [f46384d9-dda0-4459-8771-9899ad79866e] Running
	I0916 23:49:51.526755  522590 system_pods.go:126] duration metric: took 876.872082ms to wait for k8s-apps to be running ...
	I0916 23:49:51.526767  522590 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:49:51.526834  522590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:49:51.543571  522590 system_svc.go:56] duration metric: took 16.790922ms WaitForService to wait for kubelet
	I0916 23:49:51.543604  522590 kubeadm.go:578] duration metric: took 41.646760707s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
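The kubelet wait is a plain systemd probe: is-active exits 0 only when the unit is active, and --quiet suppresses the state word so only the exit code matters. Equivalent by hand (the echo just makes the exit status visible):

    sudo systemctl is-active --quiet kubelet && echo kubelet is active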
	I0916 23:49:51.543633  522590 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:49:51.546804  522590 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 23:49:51.546832  522590 node_conditions.go:123] node cpu capacity is 8
	I0916 23:49:51.546851  522590 node_conditions.go:105] duration metric: took 3.210939ms to run NodePressure ...
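The NodePressure step reads capacity straight off the node object; the same figures logged above (304681132Ki of ephemeral storage, 8 CPUs) are visible with:

    kubectl get node addons-069011 -o jsonpath='{.status.capacity}'
    # prints the capacity map, including cpu and ephemeral-storage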
	I0916 23:49:51.546866  522590 start.go:241] waiting for startup goroutines ...
	I0916 23:49:51.653201  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:51.655460  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:51.655502  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:51.867905  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:52.133215  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:49:52.152421  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:52.155241  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:52.155318  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:52.367901  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:52.651612  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:52.655810  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:52.655874  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0916 23:49:52.780604  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:52.780644  522590 retry.go:31] will retry after 11.236841486s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:49:52.867960  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:53.152499  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:53.155229  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:53.155690  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:53.369120  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:53.653294  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:53.655366  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:53.655499  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:53.867612  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:54.152263  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:54.154786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:54.154825  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:54.368535  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:54.651809  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:54.655532  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:54.655654  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:54.868318  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:55.152216  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:55.154997  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:55.155198  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:55.368885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:55.652607  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:55.654882  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:55.654882  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:55.868072  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:56.153735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:56.155961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:56.156369  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:56.367288  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:56.651552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:56.654554  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:56.654654  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:56.867827  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:57.152232  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:57.154799  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:57.154814  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:57.368344  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:57.651690  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:57.655166  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:57.655327  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:57.867912  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:58.152149  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:58.155593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:58.155720  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:58.367868  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:58.652249  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:58.654626  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:58.654817  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:58.867989  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:59.152281  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:59.154848  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:59.154899  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:59.368414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:49:59.651849  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:49:59.655048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:49:59.655193  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:49:59.866961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:00.152429  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:00.154913  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:00.154932  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:00.367821  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:00.652008  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:00.655477  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:00.655518  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:00.867460  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:01.152318  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:01.155248  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:01.155323  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:01.367552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:01.651746  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:01.655519  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:01.655601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:01.867766  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:02.152212  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:02.154600  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:02.154831  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:02.367336  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:02.651757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:02.655315  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:02.655331  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:02.867665  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:03.152281  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:03.154749  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:03.154818  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:03.368215  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:03.651319  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:03.655739  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:03.655966  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:03.868159  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:04.018435  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:50:04.151970  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:04.155986  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:04.156204  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:04.367594  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:50:04.598781  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:50:04.598815  522590 retry.go:31] will retry after 23.829016694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
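Note that the delays grow across attempts (9.2s, then 11.2s, now 23.8s): retry.go is applying a backoff, not a fixed interval. Since the same client-side validation error recurs on every attempt, no amount of retrying can succeed here; the manifest itself is what needs fixing. The loop, reduced to a shell sketch (simplified command, illustrative delays):

    for delay in 9 11 24; do
      kubectl apply --force \
        -f /etc/kubernetes/addons/ig-crd.yaml \
        -f /etc/kubernetes/addons/ig-deployment.yaml && break
      sleep "$delay"
    done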
	I0916 23:50:04.652029  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:04.655382  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:04.655518  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:04.867585  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:05.151943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:05.155427  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:05.155490  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:05.367838  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:05.652819  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:05.654813  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:05.654893  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:05.868265  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:06.151902  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:06.155241  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:06.155278  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:06.367335  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:06.651933  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:06.655376  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:06.655409  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:06.867544  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:07.151927  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:07.155463  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:07.155566  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:07.367946  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:07.652554  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:07.655150  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:07.655250  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:07.867104  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:08.151576  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:08.154867  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:08.154932  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:08.367820  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:08.652108  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:08.655667  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:08.655674  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:08.867488  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:09.151318  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:09.155660  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:09.155771  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:09.368018  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:09.652352  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:09.654759  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:09.654924  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:09.867979  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:10.152292  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:10.154712  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:10.154744  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:10.367888  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:10.652342  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:10.654855  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:10.655052  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:10.868023  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:11.152284  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:11.154741  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:11.154823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:11.368224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:11.651602  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:11.654730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:11.655430  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:11.867911  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:12.152453  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:12.155032  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:12.155233  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:12.367898  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:12.652236  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:12.654831  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:12.654839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:12.868375  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:13.151282  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:13.155678  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:13.155786  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:13.368346  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:13.652132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:13.655641  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:13.655658  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:13.867735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:14.152048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:14.155624  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:14.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:14.367645  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:14.651952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:14.655351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:14.655433  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:14.867300  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:15.151804  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:15.155275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:15.155321  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:15.367103  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:15.651754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:15.655590  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:15.655740  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:15.868629  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:16.152123  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:16.155556  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:16.155585  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:16.367279  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:16.651583  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:16.655042  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:16.655146  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:16.867499  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:17.151753  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:17.154889  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:17.154944  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:17.368258  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:17.651448  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:17.655920  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:17.655988  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:17.868165  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:18.151576  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:18.155019  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:18.155157  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:18.368301  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:18.651579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:18.654851  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:18.655022  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:18.868093  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:19.152647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:19.154885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:19.154951  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:19.368636  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:19.651987  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:19.655509  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:19.655549  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:19.867433  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:20.152200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:20.154985  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:20.155048  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:20.368109  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:20.651638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:20.654894  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:20.654923  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:20.867870  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:21.152292  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:21.155357  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:21.155505  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:21.368035  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:21.652897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:21.656101  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:21.656100  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:21.867817  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:22.152943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:22.155198  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:22.155272  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:22.367576  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:22.652627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:22.655810  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:22.655870  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:22.867990  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:23.152723  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:23.155609  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:23.155624  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:23.367814  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:23.653531  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:23.655283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:23.655824  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:23.867298  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:24.151888  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:24.155832  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:24.155956  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:24.373346  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:24.652179  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:24.655942  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:24.656079  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:24.867787  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:25.152745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:25.156266  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:25.156485  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:25.367952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:25.653577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:25.655613  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:25.655819  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:25.867860  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:26.153299  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:26.155510  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:26.155645  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:26.367671  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:26.652834  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:26.655448  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:26.655652  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:26.867254  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:27.151981  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:27.156009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:27.156850  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:27.367744  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:27.654351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:27.656634  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:27.656737  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:27.868098  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:28.153435  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:28.156745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:28.156944  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:28.367835  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:28.428940  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:50:28.651949  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:28.655492  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:28.655714  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:28.866833  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:50:29.128531  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:50:29.128569  522590 retry.go:31] will retry after 40.39789771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
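	The validation failure above occurs because kubectl requires every document in an applied manifest to declare apiVersion and kind before it can be checked against the API schema; at least one document in ig-crd.yaml lacks both fields (the deployment manifest itself applies cleanly, as the "unchanged"/"configured" stdout shows). The log does not include the contents of ig-crd.yaml, so the following is only a minimal sketch of a CRD header that would pass this validation; the group and resource names are illustrative, not taken from the failing file:

	    # Minimal sketch: the two fields flagged as missing must head each document.
	    apiVersion: apiextensions.k8s.io/v1   # required; was "apiVersion not set"
	    kind: CustomResourceDefinition        # required; was "kind not set"
	    metadata:
	      name: traces.gadget.example.io      # illustrative name, not from ig-crd.yaml
	    spec:
	      group: gadget.example.io
	      names:
	        kind: Trace
	        plural: traces
	        singular: trace
	      scope: Namespaced
	      versions:
	        - name: v1alpha1
	          served: true
	          storage: true
	          schema:
	            openAPIV3Schema:
	              type: object
	              x-kubernetes-preserve-unknown-fields: true

	Passing --validate=false, as the error message suggests, would only suppress the client-side check; the API server would still reject a document with no kind, so fixing the manifest header is the actual remedy, and the 40s retry seen below reapplies the same unmodified files.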
	I0916 23:50:29.154066  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:29.156666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:29.156872  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:29.367799  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:29.652238  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:29.654780  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:29.655095  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:29.867922  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:30.152458  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:30.155006  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:30.155093  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:30.367812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:30.652850  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:30.655351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:30.655439  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:30.867340  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:31.151917  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:31.155386  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:31.155417  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:31.367531  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:31.653268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:31.657791  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:31.657831  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:31.868270  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:32.155469  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:32.157902  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:32.158614  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:32.368334  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:32.652124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:32.656126  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:32.656171  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:32.867579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:33.152224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:33.155033  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:33.156187  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:33.366965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:33.652338  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:33.655162  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:33.655350  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:33.868673  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:34.152675  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:34.155008  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:34.155063  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:34.368239  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:34.652014  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:34.655025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:34.655185  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:34.867899  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:35.152626  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:35.155359  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:35.155446  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:35.367305  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:35.652378  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:35.655807  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:35.655815  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:35.868004  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:36.152291  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:36.155228  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:36.155274  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:36.367904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:36.652666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:36.655054  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:36.655056  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:36.868245  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:37.153660  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:37.155936  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:37.156021  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:37.367947  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:37.652965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:37.654916  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:37.654970  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:37.867352  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:38.152079  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:38.155581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:38.155593  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:38.367781  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:38.652943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:38.655717  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:38.655815  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:38.868640  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:39.152316  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:39.155082  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:39.155138  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:39.368233  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:39.651993  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:39.654885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:39.655026  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:39.868217  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:40.152059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:40.155525  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:40.155590  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:40.367907  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:40.652106  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:40.655499  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:40.655512  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:40.867817  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:41.152251  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:41.154655  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:41.154763  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:41.367545  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:41.652678  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:41.654751  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:41.654768  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:41.868012  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:42.152312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:42.154862  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:42.154889  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:42.368681  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:42.652243  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:42.654497  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:42.654707  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:42.867848  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:43.152560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:43.156124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:43.156157  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:43.367649  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:43.652430  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:43.654968  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:43.654986  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:43.867477  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:44.151715  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:44.154833  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:44.154926  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:44.368003  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:44.652097  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:44.655411  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:44.655482  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:44.867734  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:45.151785  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:45.155040  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:45.155294  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:45.367710  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:45.652316  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:45.654798  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:45.654835  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:45.867771  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:46.151940  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:46.155607  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:46.155638  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:46.367470  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:46.652017  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:46.655632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:46.655678  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:46.867796  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:47.152166  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:47.155566  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:47.155778  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:47.367781  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:47.653210  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:47.655490  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:47.655647  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:47.867856  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:48.152084  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:48.155486  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:48.155488  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:48.367425  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:48.651605  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:48.654912  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:48.654974  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:48.868218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:49.151097  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:49.155642  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:49.155716  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:49.367781  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:49.652527  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:49.654528  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:49.654540  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:49.867508  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:50.152341  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:50.155428  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:50.155428  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:50.367631  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:50.651795  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:50.654967  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:50.655191  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:50.867951  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:51.152414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:51.154961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:51.155228  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:51.368136  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:51.654278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:51.658434  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:51.658602  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:51.867554  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:52.151825  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:52.154981  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:52.155043  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:52.368227  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:52.651587  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:52.654841  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:52.654981  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:52.868253  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:53.151568  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:53.154852  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:53.154906  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:53.368332  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:53.652244  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:53.654695  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:53.654772  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:53.867872  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:54.152199  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:54.155137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:54.155272  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:54.367783  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:54.652699  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:54.654783  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:54.654979  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:54.868132  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:55.152259  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:55.154647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:55.154768  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:55.367668  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:55.652881  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:55.655002  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:55.655049  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:55.868381  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:56.151518  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:56.154713  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:56.154713  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:56.367620  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:56.651888  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:56.655083  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:56.655175  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:56.868708  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:57.152144  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:57.155438  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:57.155487  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:57.367472  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:57.652234  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:57.654836  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:57.654874  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:57.867903  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:58.152561  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:58.154532  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:58.154668  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:58.367739  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:58.652325  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:58.655541  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:58.655728  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:58.867577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:59.152224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:59.155017  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:59.155130  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:59.368654  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:50:59.652953  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:50:59.654943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:50:59.654982  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:50:59.868114  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:00.151581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:00.154961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:00.155143  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:00.368473  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:00.651816  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:00.655282  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:00.655277  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:00.867147  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:01.151121  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:01.155427  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:01.155456  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:01.367218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:01.651621  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:01.654735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:01.654783  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:01.867758  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:02.152018  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:02.155540  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:02.155576  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:02.367896  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:02.652385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:02.655222  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:02.655273  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:02.867265  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:03.151348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:03.156159  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:03.156250  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:03.367497  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:03.652167  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:03.655608  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:03.655715  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:03.867725  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:04.151972  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:04.155471  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:04.155479  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:04.367579  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:04.652472  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:04.655145  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:04.655205  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:04.867055  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:05.153048  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:05.155508  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:05.155556  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:05.367853  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:05.653083  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:05.655046  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:05.655090  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:05.867138  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:06.152134  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:06.155607  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:06.155674  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:06.367789  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:06.652335  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:06.654809  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:06.654932  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:06.868697  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:07.152531  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:07.154911  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:07.154955  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:07.370805  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:07.652428  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:07.654916  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:07.654974  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:07.868557  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:08.151860  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:08.155090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:08.155145  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:08.367368  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:08.651698  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:08.654845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:08.654852  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:08.868069  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:09.151519  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:09.154937  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:09.154942  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:09.368515  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:09.526750  522590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:51:09.652541  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:09.655572  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:09.655659  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:09.868054  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0916 23:51:10.098163  522590 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 23:51:10.098324  522590 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
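The inspektor-gadget failure logged above is a client-side manifest-validation error, not a cluster-state problem: kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because at least one document in that file is missing the top-level apiVersion and kind fields that every Kubernetes manifest must declare. As a minimal sketch of what the validator expects at the head of a CRD manifest (the name and group below are hypothetical, chosen only for illustration; the actual contents of ig-crd.yaml are not shown in this log):

	apiVersion: apiextensions.k8s.io/v1   # required; its absence produces "apiVersion not set"
	kind: CustomResourceDefinition        # required; its absence produces "kind not set"
	metadata:
	  name: traces.gadget.kinvolk.io      # hypothetical CRD name, for illustration only
	spec:
	  group: gadget.kinvolk.io            # hypothetical group; versions/schema omitted from this sketch

A manifest carrying that header passes the check that fails here, and it can typically be verified without touching the cluster via kubectl apply --dry-run=client -f ig-crd.yaml. The --validate=false workaround suggested in the stderr would skip the check entirely; minikube instead retries the apply, as the "apply failed, will retry" warning above indicates, while the addon wait loops below continue polling.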
	I0916 23:51:10.152880  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:10.154839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:10.154875  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:10.367834  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:10.652251  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:10.655021  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:10.655084  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:10.867384  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:11.151842  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:11.155099  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:11.155150  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:11.368186  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:11.652269  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:11.654999  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:11.655256  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:11.867128  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:12.152667  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:12.155099  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:12.155107  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:12.367914  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:12.652518  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:12.654870  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:12.654893  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:12.867312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:13.151982  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:13.155271  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:13.155332  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:13.367823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:13.652387  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:13.654951  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:13.655146  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:13.868844  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:14.153334  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:14.155643  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:14.155904  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:14.368482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:14.652515  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:14.655724  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:14.655757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:14.867812  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:15.152601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:15.155443  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:15.155604  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:15.367774  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:15.652539  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:15.655836  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:15.655906  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:15.868440  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:16.151573  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:16.154754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:16.154807  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:16.368168  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:16.652042  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:16.655560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:16.655747  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:16.868218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:17.151965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:17.155140  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:17.155210  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:17.368464  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:17.652037  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:17.655823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:17.655854  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:17.867935  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:18.152022  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:18.155444  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:18.155517  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:18.367482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:18.651927  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:18.654865  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:18.655024  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:18.868282  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:19.151370  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:19.155878  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:19.155924  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:19.368413  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:19.651943  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:19.655352  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:19.655352  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:19.868827  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:20.151845  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:20.155066  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:20.155072  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:20.369339  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:20.651811  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:20.654774  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:20.654963  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:20.867983  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:21.152276  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:21.154893  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:21.154944  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:21.367794  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:21.652538  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:21.654934  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:21.654939  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:21.867898  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:22.151949  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:22.155295  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:22.155445  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:22.367407  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:22.651590  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:22.654904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:22.655019  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:22.867887  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:23.152190  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:23.155502  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:23.155545  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:23.367753  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:23.652562  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:23.654651  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:23.654656  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:23.867848  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:24.152073  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:24.155610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:24.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:24.367957  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:24.652348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:24.654900  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:24.654900  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:24.868057  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:25.152408  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:25.155409  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:25.155602  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:25.368413  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:25.652052  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:25.655209  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:25.655312  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:25.867380  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:26.151535  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:26.155823  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:26.155856  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:26.368351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:26.651651  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:26.654990  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:26.654988  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:26.867537  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:27.152091  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:27.155112  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:27.155142  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:27.368638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:27.654137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:27.656355  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:27.656515  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:27.869096  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:28.152385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:28.154581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:28.154673  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:28.367987  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:28.652294  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:28.654753  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:28.654853  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:28.869651  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:29.152647  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:29.154807  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:29.154850  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:29.368887  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:29.654241  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:29.655038  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:29.655196  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:29.867665  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:30.151919  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:30.155232  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:30.155296  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:30.367463  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:30.651721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:30.655098  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:30.655163  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:30.867385  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:31.151552  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:31.154871  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:31.154947  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:31.369090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:31.652787  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:31.654631  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:31.654656  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:31.869965  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:32.152268  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:32.154797  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:32.154858  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:32.368137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:32.651480  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:32.654729  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:32.654778  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:32.868357  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:33.151932  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:33.155182  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:33.155339  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:33.367560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:33.651975  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:33.655351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:33.655413  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:33.867981  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:34.152479  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:34.155002  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:34.155059  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:34.368688  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:34.651549  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:34.655000  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:34.655063  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:34.868189  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:35.151809  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:35.155205  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:35.155350  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:35.367322  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:35.651627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:35.752333  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:35.752426  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:35.868016  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:36.152178  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:36.155466  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:36.155666  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:36.368191  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:36.651475  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:36.654786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:36.654883  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:36.868252  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:37.152153  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:37.155806  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:37.155969  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:37.368131  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:37.652021  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:37.655754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:37.655968  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:37.869697  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:38.152009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:38.155144  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:38.155151  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:38.369995  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:38.652185  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:38.655536  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:38.655553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:38.867639  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:39.151740  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:39.154964  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:39.155029  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:39.368608  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:39.651802  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:39.654757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:39.654961  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:39.869716  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:40.152077  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:40.155323  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:40.155354  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:40.367481  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:40.651750  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:40.655053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:40.655154  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:40.867047  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:41.152227  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:41.154790  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:41.154936  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:41.367727  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:41.652124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:41.655578  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:41.655618  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:41.869685  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:42.152239  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:42.154748  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:42.154775  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:42.367986  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:42.652348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:42.654735  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:42.654796  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:42.868157  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:43.151984  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:43.155093  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:43.155268  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:43.367574  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:43.652278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:43.655113  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:43.655163  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:43.867108  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:44.151635  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:44.155169  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:44.155303  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:44.367632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:44.654449  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:44.656348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:44.656416  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:44.867492  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:45.151632  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:45.155015  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:45.155082  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:45.368046  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:45.652581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:45.655278  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:45.655440  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:45.867304  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:46.151985  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:46.155138  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:46.155139  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:46.367275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:46.652201  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:46.654659  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:46.654708  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:51:46.867813  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:51:47.152102  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:51:47.155410  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:51:47.155445  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... 465 further identical kapi.go:96 polling entries elided: the same four pod selectors ("kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver", "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=registry") were re-checked roughly every 500ms from 23:51:47 through 23:52:45, and every check reported current state: Pending: [<nil>] ...]
	I0916 23:52:45.652246  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:45.654603  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:45.654734  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:45.868092  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:46.152800  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:46.154702  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:46.154910  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:46.367595  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:46.651605  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:46.654693  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:46.654706  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:46.867547  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:47.151877  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:47.155211  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:47.155305  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:47.367273  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:47.651756  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:47.655345  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:47.655367  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:47.867318  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:48.151786  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:48.155034  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:48.155115  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:48.368351  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:48.651521  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:48.655726  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:48.655766  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:48.868163  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:49.151496  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:49.155224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:49.155243  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:49.366955  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:49.652531  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:49.655173  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:49.655184  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:49.867097  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:50.152201  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:50.155505  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:50.155636  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:50.367562  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:50.651843  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:50.655301  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:50.655384  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:50.868028  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:51.152914  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:51.155252  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:51.155462  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:51.367149  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:51.651713  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:51.655354  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:51.655450  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:51.867440  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:52.151891  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:52.155305  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:52.155443  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:52.368461  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:52:52.652610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:52.655667  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:52.655854  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:52.901721  522590 kapi.go:107] duration metric: took 3m34.537544348s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 23:52:52.906543  522590 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-069011 cluster.
	I0916 23:52:52.912324  522590 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 23:52:52.913737  522590 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
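
	The gcp-auth messages above describe the opt-out mechanism: a label whose key is `gcp-auth-skip-secret` on the pod. A minimal client-go sketch of creating such a pod follows; the pod name, namespace, image, and label value are illustrative assumptions (an equivalent `metadata.labels` entry in a YAML manifest works the same way).

	// Sketch: opting one pod out of gcp-auth credential mounting by adding
	// the gcp-auth-skip-secret label key, as the addon message above suggests.
	// Pod name, namespace, image, and label value are illustrative.
	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(config)
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds",
				// The gcp-auth webhook skips pods carrying this label key.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
			},
		}
		if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
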
	I0916 23:52:53.153197  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:53.155660  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:53.155666  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:53.652828  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:53.655014  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:53.655110  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:54.152324  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:54.155476  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:54.155496  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:54.652106  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:54.655581  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:54.655609  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:55.152128  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:55.155885  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:55.156039  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:55.652641  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:55.654855  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:55.654978  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:56.152674  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:56.154874  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:56.155000  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:56.652035  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:56.655457  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:56.655496  522590 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:52:57.152186  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:57.155542  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:57.155561  522590 kapi.go:107] duration metric: took 3m45.503354476s to wait for app.kubernetes.io/name=ingress-nginx ...
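
	The kapi.go:96 lines throughout this log come from a label-selector poll: list the pods matching a selector on a short interval until one reaches Running, or the deadline expires (here 3m45s elapsed before app.kubernetes.io/name=ingress-nginx became ready). The following is a rough client-go approximation of that loop, not minikube's actual kapi implementation; the selector, namespace, poll interval, and timeout are illustrative assumptions.

	// Sketch of a label-selector wait loop like the one logging above:
	// poll the pod list every 500ms until a matching pod reports Running.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForPod(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // treat API errors as transient and keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
				return false, nil
			})
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(config)
		if err := waitForPod(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			panic(err)
		}
	}

	Note the failure mode visible in this report: the condition function never returns true, so the poll runs until its timeout and the surrounding test gives up after its own 6m0s deadline.
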
	I0916 23:52:57.652350  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:57.655498  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:58.152881  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:58.154850  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:58.652665  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:58.654696  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:59.152543  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:59.154283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:52:59.653277  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:52:59.659941  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:00.152852  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:00.154649  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:00.652327  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:00.654800  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:01.152414  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:01.154525  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:01.651817  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:01.655138  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:02.152332  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:02.154656  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:02.653502  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:02.656037  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:03.151857  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:03.155055  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:03.652334  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:03.654876  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:04.152174  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:04.155870  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:04.653124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:04.655053  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:05.153568  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:05.155625  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:05.653230  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:05.655236  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:06.152361  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:06.154928  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:06.653059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:06.656200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:07.152336  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:07.155224  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:07.652346  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:07.655712  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:08.155752  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:08.155824  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:08.653610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:08.655208  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:09.152628  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:09.154934  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:09.652494  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:09.655144  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:10.154348  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:10.155986  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:10.652369  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:10.655443  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:11.152148  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:11.155670  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:11.652553  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:11.655243  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:12.152796  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:12.155106  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:12.651747  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:12.655634  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:13.153010  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:13.155374  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:13.654738  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:13.656482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:14.152952  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:14.155229  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:14.652523  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:14.655028  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:15.152364  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:15.155721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:15.655954  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:15.656795  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:16.152967  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:16.154926  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:16.653027  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:16.655826  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:17.153039  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:17.154839  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:17.653034  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:17.655038  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:18.152156  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:18.156123  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:18.651828  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:18.654999  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:19.151648  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:19.154596  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:19.652222  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:19.654551  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:20.155150  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:20.155193  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:20.652029  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:20.655101  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:21.151749  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:21.154961  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:21.651672  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:21.655009  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:22.152329  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:22.154730  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:22.652063  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:22.655272  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:23.152182  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:23.155422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:23.652218  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:23.654560  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:24.152574  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:24.155253  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:24.652502  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:24.655345  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:25.151663  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:25.155115  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:25.651721  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:25.655044  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:26.152383  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:26.155509  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:26.652354  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:26.654747  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:27.169011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:27.169001  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:27.653424  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:27.655714  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:28.152979  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:28.254144  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:28.651804  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:28.655470  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:29.151827  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:29.155108  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:29.652422  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:29.655116  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:30.152193  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:30.155976  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:30.652210  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:30.654980  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:31.151709  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:31.155038  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:31.651589  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:31.655050  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:32.151868  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:32.155145  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:32.652363  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:32.655892  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:33.151643  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:33.154810  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:33.653583  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:33.655279  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:34.153153  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:34.155522  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:34.652584  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:34.655570  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:35.151580  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:35.156561  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:35.652732  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:35.655133  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:36.155361  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:36.158601  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:36.652275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:36.654674  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:37.153755  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:37.155714  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:37.652926  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:37.654759  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:38.151466  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:38.154733  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:38.653313  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:38.655745  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:39.152234  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:39.155638  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:39.652445  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:39.654541  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:40.152461  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:40.155143  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:40.652312  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:40.654686  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:41.152156  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:41.155170  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:41.651644  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:41.654733  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:42.152309  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:42.154360  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:42.652338  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:42.654550  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:43.151904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:43.154960  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:43.652091  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:43.655542  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:44.151570  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:44.154712  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:44.652708  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:44.654522  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:45.151593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:45.154608  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:45.651922  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:45.655174  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:46.151376  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:46.155482  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:46.652627  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:46.654516  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:47.151782  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:47.154824  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:47.652429  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:47.654757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:48.152137  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:48.154936  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:48.651792  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:48.654929  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:49.152207  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:49.155200  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:49.652077  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:49.655059  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:50.152055  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:50.155283  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:50.651757  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:50.654677  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:51.152004  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:51.154803  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:51.653046  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:51.654923  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:52.152123  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:52.154978  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:52.651950  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:52.654986  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:53.151595  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:53.154725  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:53.652661  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:53.654540  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:54.152011  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:54.155079  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:54.652239  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:54.654476  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:55.151772  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:55.155226  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:55.652520  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:55.655124  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:56.151415  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:56.155604  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:56.652777  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:56.654897  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:57.152275  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:57.155829  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:57.653025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:57.654754  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:58.152978  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:58.154716  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:58.652635  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:58.654449  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:59.152070  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:59.155270  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:53:59.652577  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:53:59.655424  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:00.152756  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:00.154426  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:00.651964  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:00.655181  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:01.151369  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:01.155561  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:01.651593  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:01.654586  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:02.152252  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:02.154655  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:02.652610  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:02.654423  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:03.152030  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:03.155167  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:03.651855  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:03.654881  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:04.151556  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:04.154852  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:04.652834  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:04.654500  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:05.152255  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:05.154344  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:05.652483  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:05.655325  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:06.151729  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:06.154664  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:54:06.652904  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:54:06.654681  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... 256 identical poll lines elided: the same two "waiting for pod" messages alternate roughly every 0.5s from 23:54:07 through 23:55:10, with both pods reported Pending: [<nil>] throughout ...]
	I0916 23:55:11.152090  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:11.155188  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:55:11.652025  522590 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:55:11.652821  522590 kapi.go:107] duration metric: took 6m0.000625805s to wait for kubernetes.io/minikube-addons=registry ...
	W0916 23:55:11.652991  522590 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0916 23:55:12.148606  522590 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=csi-hostpath-driver" : [client rate limiter Wait returned an error: context deadline exceeded]
	I0916 23:55:12.148655  522590 kapi.go:107] duration metric: took 6m0.000415083s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	W0916 23:55:12.148771  522590 out.go:285] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	I0916 23:55:12.151062  522590 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, ingress-dns, amd-gpu-device-plugin, storage-provisioner, default-storageclass, storage-provisioner-rancher, cloud-spanner, metrics-server, yakd, volumesnapshots, gcp-auth, ingress
	I0916 23:55:12.152575  522590 addons.go:514] duration metric: took 6m2.25568849s for enable addons: enabled=[registry-creds nvidia-device-plugin ingress-dns amd-gpu-device-plugin storage-provisioner default-storageclass storage-provisioner-rancher cloud-spanner metrics-server yakd volumesnapshots gcp-auth ingress]
	I0916 23:55:12.152638  522590 start.go:246] waiting for cluster config update ...
	I0916 23:55:12.152661  522590 start.go:255] writing updated cluster config ...
	I0916 23:55:12.152955  522590 ssh_runner.go:195] Run: rm -f paused
	I0916 23:55:12.157549  522590 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:55:12.161141  522590 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m872b" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.165703  522590 pod_ready.go:94] pod "coredns-66bc5c9577-m872b" is "Ready"
	I0916 23:55:12.165731  522590 pod_ready.go:86] duration metric: took 4.567019ms for pod "coredns-66bc5c9577-m872b" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.168067  522590 pod_ready.go:83] waiting for pod "etcd-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.172550  522590 pod_ready.go:94] pod "etcd-addons-069011" is "Ready"
	I0916 23:55:12.172583  522590 pod_ready.go:86] duration metric: took 4.489308ms for pod "etcd-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.174872  522590 pod_ready.go:83] waiting for pod "kube-apiserver-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.179401  522590 pod_ready.go:94] pod "kube-apiserver-addons-069011" is "Ready"
	I0916 23:55:12.179432  522590 pod_ready.go:86] duration metric: took 4.532992ms for pod "kube-apiserver-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.181473  522590 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.561817  522590 pod_ready.go:94] pod "kube-controller-manager-addons-069011" is "Ready"
	I0916 23:55:12.561846  522590 pod_ready.go:86] duration metric: took 380.349392ms for pod "kube-controller-manager-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:12.763149  522590 pod_ready.go:83] waiting for pod "kube-proxy-v85kq" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.161850  522590 pod_ready.go:94] pod "kube-proxy-v85kq" is "Ready"
	I0916 23:55:13.161880  522590 pod_ready.go:86] duration metric: took 398.696904ms for pod "kube-proxy-v85kq" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.362802  522590 pod_ready.go:83] waiting for pod "kube-scheduler-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.761895  522590 pod_ready.go:94] pod "kube-scheduler-addons-069011" is "Ready"
	I0916 23:55:13.761929  522590 pod_ready.go:86] duration metric: took 399.094008ms for pod "kube-scheduler-addons-069011" in "kube-system" namespace to be "Ready" or be gone ...
	I0916 23:55:13.761944  522590 pod_ready.go:40] duration metric: took 1.604356273s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0916 23:55:13.810173  522590 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0916 23:55:13.812279  522590 out.go:179] * Done! kubectl is now configured to use "addons-069011" cluster and "default" namespace by default
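
The repeated kapi.go:96 "waiting for pod" lines above are minikube polling the API server for pods matching a label selector until they leave Pending or the per-addon context deadline (6m0s here) expires. Below is a minimal sketch of that pattern written against client-go; the helper name waitForPodsByLabel, the choice of wait.PollUntilContextCancel, and the kubeconfig loading in main are illustrative assumptions, not minikube's actual kapi.go implementation:

    // Illustrative sketch only -- not minikube's kapi.go code.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsByLabel polls every interval until every pod matching
    // selector is Running (and at least one exists), or until ctx expires --
    // the source of the "context deadline exceeded" errors in the log above.
    func waitForPodsByLabel(ctx context.Context, c kubernetes.Interface, ns, selector string, interval time.Duration) error {
    	return wait.PollUntilContextCancel(ctx, interval, true, func(ctx context.Context) (bool, error) {
    		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			// Transient list errors are logged and retried, matching the
    			// "temporary error: getting Pods with label selector ..." line.
    			fmt.Printf("temporary error: %v\n", err)
    			return false, nil
    		}
    		for _, p := range pods.Items {
    			if p.Status.Phase != corev1.PodRunning {
    				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    				return false, nil
    			}
    		}
    		return len(pods.Items) > 0, nil
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Mirror the 6m0s per-addon wait seen in the log.
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	if err := waitForPodsByLabel(ctx, client, "kube-system", "kubernetes.io/minikube-addons=registry", 500*time.Millisecond); err != nil {
    		fmt.Printf("registry pods never became Ready: %v\n", err)
    	}
    }

In this failure the condition never returned true: the CRI-O log below shows the registry image (docker.io/registry:3.0.0@sha256:3725...) was never pulled, so the pods stayed Pending until the deadline fired.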
	
	
	==> CRI-O <==
	Sep 17 00:02:36 addons-069011 crio[933]: time="2025-09-17 00:02:36.174761085Z" level=info msg="Image docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d not found" id=c3be7c1d-9583-42c5-8235-17f756a693c8 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:38 addons-069011 crio[933]: time="2025-09-17 00:02:38.420564853Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=371c09d4-b5e2-4516-b98d-56e0ef8ca3ce name=/runtime.v1.ImageService/PullImage
	Sep 17 00:02:38 addons-069011 crio[933]: time="2025-09-17 00:02:38.426762485Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 17 00:02:46 addons-069011 crio[933]: time="2025-09-17 00:02:46.174573325Z" level=info msg="Checking image status: docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89" id=e92776c5-2260-4c62-88d0-bac25ecc4762 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:46 addons-069011 crio[933]: time="2025-09-17 00:02:46.174931830Z" level=info msg="Image docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 not found" id=e92776c5-2260-4c62-88d0-bac25ecc4762 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:47 addons-069011 crio[933]: time="2025-09-17 00:02:47.174617707Z" level=info msg="Checking image status: docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d" id=d6a13927-c266-49dc-be8d-0a633ee3e91a name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:47 addons-069011 crio[933]: time="2025-09-17 00:02:47.174749174Z" level=info msg="Checking image status: docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f" id=faf20ee6-c881-4a52-a381-2762590afbb2 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:47 addons-069011 crio[933]: time="2025-09-17 00:02:47.174937285Z" level=info msg="Image docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f not found" id=faf20ee6-c881-4a52-a381-2762590afbb2 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:47 addons-069011 crio[933]: time="2025-09-17 00:02:47.174957387Z" level=info msg="Image docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d not found" id=d6a13927-c266-49dc-be8d-0a633ee3e91a name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:51 addons-069011 crio[933]: time="2025-09-17 00:02:51.277094267Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb/POD" id=20892a75-d785-4bce-b14d-b47efa4aeae3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 17 00:02:51 addons-069011 crio[933]: time="2025-09-17 00:02:51.277175509Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 17 00:02:51 addons-069011 crio[933]: time="2025-09-17 00:02:51.296640719Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb Namespace:local-path-storage ID:01b234e4ce4b3213bda6de85ea1ad335b319f7a84a97730657c14da112ee9249 UID:ed2099f3-5b8b-4c41-a38b-24d1fff3085a NetNS:/var/run/netns/6d63d1e4-3ce6-4389-9d94-63db717f05be Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 17 00:02:51 addons-069011 crio[933]: time="2025-09-17 00:02:51.296677247Z" level=info msg="Adding pod local-path-storage_helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb to CNI network \"kindnet\" (type=ptp)"
	Sep 17 00:02:51 addons-069011 crio[933]: time="2025-09-17 00:02:51.307072949Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb Namespace:local-path-storage ID:01b234e4ce4b3213bda6de85ea1ad335b319f7a84a97730657c14da112ee9249 UID:ed2099f3-5b8b-4c41-a38b-24d1fff3085a NetNS:/var/run/netns/6d63d1e4-3ce6-4389-9d94-63db717f05be Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 17 00:02:51 addons-069011 crio[933]: time="2025-09-17 00:02:51.307207883Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb for CNI network kindnet (type=ptp)"
	Sep 17 00:02:51 addons-069011 crio[933]: time="2025-09-17 00:02:51.308077336Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 17 00:02:51 addons-069011 crio[933]: time="2025-09-17 00:02:51.308925428Z" level=info msg="Ran pod sandbox 01b234e4ce4b3213bda6de85ea1ad335b319f7a84a97730657c14da112ee9249 with infra container: local-path-storage/helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb/POD" id=20892a75-d785-4bce-b14d-b47efa4aeae3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 17 00:02:51 addons-069011 crio[933]: time="2025-09-17 00:02:51.310171991Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=f39e3aeb-8e27-4ab4-8e3b-cf1fe99824e3 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:51 addons-069011 crio[933]: time="2025-09-17 00:02:51.310409133Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=f39e3aeb-8e27-4ab4-8e3b-cf1fe99824e3 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:59 addons-069011 crio[933]: time="2025-09-17 00:02:59.175030495Z" level=info msg="Checking image status: docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f" id=dc2cc620-00e1-46bd-8174-f9fdd52bc052 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:59 addons-069011 crio[933]: time="2025-09-17 00:02:59.175030505Z" level=info msg="Checking image status: docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89" id=ebf87352-c92f-4d9d-96ae-d32dac0c8e3d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:59 addons-069011 crio[933]: time="2025-09-17 00:02:59.175368358Z" level=info msg="Image docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f not found" id=dc2cc620-00e1-46bd-8174-f9fdd52bc052 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:02:59 addons-069011 crio[933]: time="2025-09-17 00:02:59.175460217Z" level=info msg="Image docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 not found" id=ebf87352-c92f-4d9d-96ae-d32dac0c8e3d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:03:01 addons-069011 crio[933]: time="2025-09-17 00:03:01.178655616Z" level=info msg="Checking image status: docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d" id=f905bfc2-65d3-468a-a468-8f9e942ffca8 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:03:01 addons-069011 crio[933]: time="2025-09-17 00:03:01.179030558Z" level=info msg="Image docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d not found" id=f905bfc2-65d3-468a-a468-8f9e942ffca8 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	8fc15d8cb7dd5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          4 minutes ago       Running             csi-snapshotter                          0                   e614fc1047195       csi-hostpathplugin-s98vb
	295b9edc02db1       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          5 minutes ago       Running             csi-provisioner                          0                   e614fc1047195       csi-hostpathplugin-s98vb
	3bebfc3ce5f89       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   b34e9dc849123       busybox
	0994d530b2186       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   e614fc1047195       csi-hostpathplugin-s98vb
	d78ede218b3d9       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           8 minutes ago       Running             hostpath                                 0                   e614fc1047195       csi-hostpathplugin-s98vb
	16a4495ac9a55       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                10 minutes ago      Running             node-driver-registrar                    0                   e614fc1047195       csi-hostpathplugin-s98vb
	ab63cb98da9fa       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             10 minutes ago      Running             controller                               0                   1c8433f3bdf68       ingress-nginx-controller-9cc49f96f-4m84v
	cb0aaa55cf5e9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            10 minutes ago      Running             gadget                                   0                   38b62a86f7523       gadget-g862x
	75b35093f1f14       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              11 minutes ago      Running             registry-proxy                           0                   f2e835ff4c172       registry-proxy-gtpv9
	af48fae595f24       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      11 minutes ago      Running             volume-snapshot-controller               0                   7daa29e729a88       snapshot-controller-7d9fbc56b8-st98r
	fce1ccd8d33b3       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   11 minutes ago      Running             csi-external-health-monitor-controller   0                   e614fc1047195       csi-hostpathplugin-s98vb
	87609248fc31a       gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58                               12 minutes ago      Running             cloud-spanner-emulator                   0                   843001c23149a       cloud-spanner-emulator-85f6b7fc65-wtp6g
	0e4759a430832       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                                             12 minutes ago      Exited              patch                                    2                   0937f6f98ea11       ingress-nginx-admission-patch-sp7zb
	3c653d4c50b5c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      12 minutes ago      Running             volume-snapshot-controller               0                   4be25aad82a4e       snapshot-controller-7d9fbc56b8-s7m82
	11ae5f470bf10       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   12 minutes ago      Exited              create                                   0                   d933a3ae75df0       ingress-nginx-admission-create-wj8lw
	0957eacca23bd       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              12 minutes ago      Running             csi-resizer                              0                   b8131d2ee78de       csi-hostpath-resizer-0
	ad4a09c21105c       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             13 minutes ago      Running             csi-attacher                             0                   15f9a9c33b53e       csi-hostpath-attacher-0
	c1b11b9e2fae1       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             13 minutes ago      Running             local-path-provisioner                   0                   be69758a594c2       local-path-provisioner-648f6765c9-4qs6g
	7d0db99be084d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             13 minutes ago      Running             storage-provisioner                      0                   e26878809420e       storage-provisioner
	b62ac7b1e2d93       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             13 minutes ago      Running             coredns                                  0                   90cd65a058e3e       coredns-66bc5c9577-m872b
	81f4db589dfd0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             13 minutes ago      Running             kindnet-cni                              0                   282dceccf27e4       kindnet-hn7tx
	8204c89cdc90d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                                             13 minutes ago      Running             kube-proxy                               0                   076ce47b67764       kube-proxy-v85kq
	d1d2d3ef1a2d6       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                                             14 minutes ago      Running             kube-controller-manager                  0                   2befa508c819b       kube-controller-manager-addons-069011
	f4991aa96dbe9       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                                             14 minutes ago      Running             kube-apiserver                           0                   24f1de8dafedd       kube-apiserver-addons-069011
	ecbc264153ff2       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                                             14 minutes ago      Running             kube-scheduler                           0                   3af000cb5a57c       kube-scheduler-addons-069011
	5a81076e6d9a8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             14 minutes ago      Running             etcd                                     0                   f590790ed13d4       etcd-addons-069011
	
	
	==> coredns [b62ac7b1e2d935063ca8c0594642886e49ad0423507f04d148e7bd385ca935ce] <==
	[INFO] 10.244.0.16:48454 - 16620 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006321836s
	[INFO] 10.244.0.16:48454 - 21780 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.000105592s
	[INFO] 10.244.0.16:48454 - 17986 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.000113125s
	[INFO] 10.244.0.16:48454 - 261 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000146724s
	[INFO] 10.244.0.16:48454 - 8277 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000164781s
	[INFO] 10.244.0.16:48454 - 28338 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000078273s
	[INFO] 10.244.0.16:48454 - 34355 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000088777s
	[INFO] 10.244.0.16:48454 - 25987 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000154054s
	[INFO] 10.244.0.16:48454 - 36104 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000115139s
	[INFO] 10.244.0.16:57222 - 7157 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000214762s
	[INFO] 10.244.0.16:57222 - 60851 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000197601s
	[INFO] 10.244.0.16:57222 - 32093 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00021264s
	[INFO] 10.244.0.16:57222 - 28961 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00022811s
	[INFO] 10.244.0.16:57222 - 19561 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000128894s
	[INFO] 10.244.0.16:57222 - 9 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000155659s
	[INFO] 10.244.0.16:57222 - 36165 "A IN registry.kube-system.svc.cluster.local.local. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.004553707s
	[INFO] 10.244.0.16:57222 - 60923 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 73 false 1232" NXDOMAIN qr,rd,ra 62 0.006338783s
	[INFO] 10.244.0.16:57222 - 34363 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.00008939s
	[INFO] 10.244.0.16:57222 - 16274 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 102 false 1232" NXDOMAIN qr,aa,rd,ra 198 0.000131534s
	[INFO] 10.244.0.16:57222 - 27190 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.00005888s
	[INFO] 10.244.0.16:57222 - 5447 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 91 false 1232" NXDOMAIN qr,aa,rd,ra 185 0.000092432s
	[INFO] 10.244.0.16:57222 - 49897 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.00007646s
	[INFO] 10.244.0.16:57222 - 23995 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 83 false 1232" NXDOMAIN qr,aa,rd,ra 177 0.000104159s
	[INFO] 10.244.0.16:57222 - 19477 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000109441s
	[INFO] 10.244.0.16:57222 - 22331 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000145462s
	
	
	==> describe nodes <==
	Name:               addons-069011
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-069011
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=addons-069011
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_49_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-069011
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-069011"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:49:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-069011
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:03:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Sep 2025 23:58:45 +0000   Tue, 16 Sep 2025 23:49:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Sep 2025 23:58:45 +0000   Tue, 16 Sep 2025 23:49:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Sep 2025 23:58:45 +0000   Tue, 16 Sep 2025 23:49:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Sep 2025 23:58:45 +0000   Tue, 16 Sep 2025 23:49:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-069011
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e6a06e1e17043f19f3b8f5ea0927359
	  System UUID:                fa23b867-4022-409a-8baa-bf981ffedafe
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (25 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	  default                     cloud-spanner-emulator-85f6b7fc65-wtp6g                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  gadget                      gadget-g862x                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-4m84v                      100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         13m
	  kube-system                 amd-gpu-device-plugin-flfw9                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-66bc5c9577-m872b                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpathplugin-s98vb                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-addons-069011                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-hn7tx                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-addons-069011                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-069011                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-v85kq                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-069011                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 registry-66898fdd98-bl4r5                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 registry-proxy-gtpv9                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-7d9fbc56b8-s7m82                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-7d9fbc56b8-st98r                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  local-path-storage          helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  local-path-storage          local-path-provisioner-648f6765c9-4qs6g                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node addons-069011 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node addons-069011 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node addons-069011 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node addons-069011 event: Registered Node addons-069011 in Controller
	  Normal  NodeReady                13m   kubelet          Node addons-069011 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	
	
	==> etcd [5a81076e6d9a8c9983866e09b1190810cd0059c34edeae1a479f9d18f3003a91] <==
	{"level":"warn","ts":"2025-09-16T23:49:00.991705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:00.999124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.014667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.021210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.027886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.034514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.041663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.048524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.054851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.061680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.068240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.075225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.081757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.105206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.111554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:01.154896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:12.666348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:12.673196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.575058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.581784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.598000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-16T23:49:38.605378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33386","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-16T23:59:00.630787Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1449}
	{"level":"info","ts":"2025-09-16T23:59:00.656834Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1449,"took":"25.282457ms","hash":3232880921,"current-db-size-bytes":5799936,"current-db-size":"5.8 MB","current-db-size-in-use-bytes":3645440,"current-db-size-in-use":"3.6 MB"}
	{"level":"info","ts":"2025-09-16T23:59:00.656898Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3232880921,"revision":1449,"compact-revision":-1}
	
	
	==> kernel <==
	 00:03:02 up  2:45,  0 users,  load average: 1.03, 4.00, 29.17
	Linux addons-069011 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [81f4db589dfd0f8f014a7fc056f2d7f752ecc52737aea10ae2f8a98d0242428b] <==
	I0917 00:01:00.185621       1 main.go:301] handling current node
	I0917 00:01:10.184174       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:01:10.184202       1 main.go:301] handling current node
	I0917 00:01:20.189514       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:01:20.189560       1 main.go:301] handling current node
	I0917 00:01:30.185700       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:01:30.186123       1 main.go:301] handling current node
	I0917 00:01:40.186524       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:01:40.186565       1 main.go:301] handling current node
	I0917 00:01:50.186736       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:01:50.186791       1 main.go:301] handling current node
	I0917 00:02:00.185635       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:02:00.185677       1 main.go:301] handling current node
	I0917 00:02:10.184347       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:02:10.184424       1 main.go:301] handling current node
	I0917 00:02:20.185542       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:02:20.185579       1 main.go:301] handling current node
	I0917 00:02:30.184600       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:02:30.184649       1 main.go:301] handling current node
	I0917 00:02:40.184804       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:02:40.184855       1 main.go:301] handling current node
	I0917 00:02:50.185040       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:02:50.185076       1 main.go:301] handling current node
	I0917 00:03:00.185549       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:03:00.185583       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f4991aa96dbe98af7f934784cdc7973d5aabec72325938f0e98ad8efde3d06e3] <==
	I0916 23:51:40.656101       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:52:37.080075       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:53:06.365528       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:53:51.505661       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:54:19.846477       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:55:21.099421       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:55:29.068080       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:56:24.856015       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0916 23:56:38.562764       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43110: use of closed network connection
	E0916 23:56:38.758708       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43158: use of closed network connection
	I0916 23:56:47.547088       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0916 23:56:47.750812       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.94.177"}
	I0916 23:56:48.077381       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.184.141"}
	I0916 23:56:56.387694       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0916 23:56:58.875443       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:57:28.517320       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:58:21.717919       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:58:53.740979       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0916 23:59:01.561467       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0916 23:59:46.839359       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:00:03.548840       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:01:10.960424       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:01:15.531695       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:28.446522       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:02:31.841808       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [d1d2d3ef1a2d61d604d7b7b71875c31a98127791ebbcaaae9e7c5dcebb1fd036] <==
	I0916 23:49:08.558692       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0916 23:49:08.559424       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0916 23:49:08.560582       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0916 23:49:08.560682       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0916 23:49:08.562044       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0916 23:49:08.562105       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0916 23:49:08.562171       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0916 23:49:08.562209       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 23:49:08.562217       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0916 23:49:08.562221       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0916 23:49:08.563325       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:49:08.564561       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0916 23:49:08.570797       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-069011" podCIDRs=["10.244.0.0/24"]
	I0916 23:49:08.576824       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0916 23:49:38.568454       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 23:49:38.568633       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0916 23:49:38.568684       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0916 23:49:38.586865       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0916 23:49:38.591210       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0916 23:49:38.668805       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0916 23:49:38.692110       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0916 23:49:53.514314       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0916 23:56:52.202912       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0916 23:58:53.764380       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0917 00:01:02.592919       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	
	
	==> kube-proxy [8204c89cdc90d58370aa745a3053c12e5b976409a1e0bedddf9508ac3e770c1f] <==
	I0916 23:49:09.803647       1 server_linux.go:53] "Using iptables proxy"
	I0916 23:49:09.874911       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:49:09.984976       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:49:09.985628       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0916 23:49:09.986296       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:49:10.154642       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 23:49:10.159433       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:49:10.183201       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:49:10.195463       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:49:10.195513       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:49:10.199563       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:49:10.199664       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:49:10.200188       1 config.go:309] "Starting node config controller"
	I0916 23:49:10.200265       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:49:10.200334       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:49:10.200369       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:49:10.200991       1 config.go:200] "Starting service config controller"
	I0916 23:49:10.201078       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:49:10.299859       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:49:10.300474       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0916 23:49:10.300501       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0916 23:49:10.302086       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ecbc264153ff2a219390febac6665f8efc1a49ab24db502b79ba6888e6bd5b71] <==
	E0916 23:49:01.591306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0916 23:49:01.591979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0916 23:49:01.591995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:49:01.592038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0916 23:49:01.592032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0916 23:49:01.592058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0916 23:49:01.592081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0916 23:49:01.592128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0916 23:49:01.592273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0916 23:49:01.592272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0916 23:49:01.592315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0916 23:49:02.478666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0916 23:49:02.478742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0916 23:49:02.495998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0916 23:49:02.533597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0916 23:49:02.645572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0916 23:49:02.658831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0916 23:49:02.700650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:49:02.730028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0916 23:49:02.731014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0916 23:49:02.807698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0916 23:49:02.811032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0916 23:49:02.813063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0916 23:49:02.832467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I0916 23:49:05.387364       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 17 00:02:34 addons-069011 kubelet[1557]: I0917 00:02:34.175036    1557 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-flfw9" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 00:02:34 addons-069011 kubelet[1557]: E0917 00:02:34.176137    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"amd-gpu-device-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f\\\": ErrImagePull: reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/amd-gpu-device-plugin-flfw9" podUID="b2f08e52-5a20-4c80-bc6c-a073ebe5797b"
	Sep 17 00:02:34 addons-069011 kubelet[1557]: E0917 00:02:34.350437    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067354350122132  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:02:34 addons-069011 kubelet[1557]: E0917 00:02:34.350475    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067354350122132  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:02:36 addons-069011 kubelet[1557]: E0917 00:02:36.175098    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-bl4r5" podUID="34782a61-58ac-458e-ab2f-7a22bac44c65"
	Sep 17 00:02:38 addons-069011 kubelet[1557]: E0917 00:02:38.419997    1557 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 17 00:02:38 addons-069011 kubelet[1557]: E0917 00:02:38.420068    1557 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 17 00:02:38 addons-069011 kubelet[1557]: E0917 00:02:38.420288    1557 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(0b15e693-4577-4039-b409-5badaa871bfc): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 00:02:38 addons-069011 kubelet[1557]: E0917 00:02:38.420346    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="0b15e693-4577-4039-b409-5badaa871bfc"
	Sep 17 00:02:38 addons-069011 kubelet[1557]: E0917 00:02:38.538636    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="0b15e693-4577-4039-b409-5badaa871bfc"
	Sep 17 00:02:44 addons-069011 kubelet[1557]: E0917 00:02:44.352870    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067364352566794  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:02:44 addons-069011 kubelet[1557]: E0917 00:02:44.352916    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067364352566794  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:02:46 addons-069011 kubelet[1557]: E0917 00:02:46.175316    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: reading manifest sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="3ebf3aba-8898-42b1-a92e-3bc50dd56aab"
	Sep 17 00:02:47 addons-069011 kubelet[1557]: I0917 00:02:47.174228    1557 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-flfw9" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 00:02:47 addons-069011 kubelet[1557]: E0917 00:02:47.175251    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"amd-gpu-device-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f\\\": ErrImagePull: reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/amd-gpu-device-plugin-flfw9" podUID="b2f08e52-5a20-4c80-bc6c-a073ebe5797b"
	Sep 17 00:02:47 addons-069011 kubelet[1557]: E0917 00:02:47.175265    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-bl4r5" podUID="34782a61-58ac-458e-ab2f-7a22bac44c65"
	Sep 17 00:02:51 addons-069011 kubelet[1557]: I0917 00:02:51.081989    1557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgkms\" (UniqueName: \"kubernetes.io/projected/ed2099f3-5b8b-4c41-a38b-24d1fff3085a-kube-api-access-lgkms\") pod \"helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb\" (UID: \"ed2099f3-5b8b-4c41-a38b-24d1fff3085a\") " pod="local-path-storage/helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb"
	Sep 17 00:02:51 addons-069011 kubelet[1557]: I0917 00:02:51.082068    1557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/ed2099f3-5b8b-4c41-a38b-24d1fff3085a-data\") pod \"helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb\" (UID: \"ed2099f3-5b8b-4c41-a38b-24d1fff3085a\") " pod="local-path-storage/helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb"
	Sep 17 00:02:51 addons-069011 kubelet[1557]: I0917 00:02:51.082119    1557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/ed2099f3-5b8b-4c41-a38b-24d1fff3085a-script\") pod \"helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb\" (UID: \"ed2099f3-5b8b-4c41-a38b-24d1fff3085a\") " pod="local-path-storage/helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb"
	Sep 17 00:02:54 addons-069011 kubelet[1557]: E0917 00:02:54.354782    1557 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067374354473152  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:02:54 addons-069011 kubelet[1557]: E0917 00:02:54.354828    1557 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067374354473152  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:439241}  inodes_used:{value:177}}"
	Sep 17 00:02:59 addons-069011 kubelet[1557]: I0917 00:02:59.174341    1557 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-flfw9" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 00:02:59 addons-069011 kubelet[1557]: E0917 00:02:59.175750    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: reading manifest sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89 in docker.io/kicbase/minikube-ingress-dns: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="3ebf3aba-8898-42b1-a92e-3bc50dd56aab"
	Sep 17 00:02:59 addons-069011 kubelet[1557]: E0917 00:02:59.175794    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"amd-gpu-device-plugin\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/rocm/k8s-device-plugin:1.25.2.8@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f\\\": ErrImagePull: reading manifest sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f in docker.io/rocm/k8s-device-plugin: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/amd-gpu-device-plugin-flfw9" podUID="b2f08e52-5a20-4c80-bc6c-a073ebe5797b"
	Sep 17 00:03:01 addons-069011 kubelet[1557]: E0917 00:03:01.179592    1557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d\\\": ErrImagePull: reading manifest sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-66898fdd98-bl4r5" podUID="34782a61-58ac-458e-ab2f-7a22bac44c65"
	
	
	==> storage-provisioner [7d0db99be084d7a7996f085af51ba0b4b9263d1a30c5ba98cac79995b3641b35] <==
	W0917 00:02:36.486541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:38.490154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:38.495746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:40.499229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:40.503656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:42.506923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:42.511258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:44.514610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:44.519161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:46.523063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:46.527609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:48.531031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:48.535325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:50.539159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:50.543383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:52.546634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:52.550987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:54.554589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:54.558969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:56.562682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:56.567680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:58.570904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:02:58.575823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:03:00.579287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:03:00.584190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
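Every pull failure captured in the logs above shares one root cause: unauthenticated pulls from docker.io hitting the Docker Hub rate limit ("toomanyrequests"). A minimal triage sketch, assuming the addons-069011 context from this run is still reachable, is to enumerate the pods stuck waiting on image pulls:

	kubectl --context addons-069011 get pods -A \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.status.containerStatuses[*].state.waiting.reason}{"\n"}{end}' \
	  | grep -E 'ErrImagePull|ImagePullBackOff'

The reason field comes straight from each container's waiting state, so the listing should line up with the non-running pods reported by helpers_test.go below.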
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-069011 -n addons-069011
helpers_test.go:269: (dbg) Run:  kubectl --context addons-069011 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-wj8lw ingress-nginx-admission-patch-sp7zb amd-gpu-device-plugin-flfw9 kube-ingress-dns-minikube registry-66898fdd98-bl4r5 helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/AmdGpuDevicePlugin]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-069011 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-wj8lw ingress-nginx-admission-patch-sp7zb amd-gpu-device-plugin-flfw9 kube-ingress-dns-minikube registry-66898fdd98-bl4r5 helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-069011 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-wj8lw ingress-nginx-admission-patch-sp7zb amd-gpu-device-plugin-flfw9 kube-ingress-dns-minikube registry-66898fdd98-bl4r5 helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb: exit status 1 (89.076234ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-069011/192.168.49.2
	Start Time:       Tue, 16 Sep 2025 23:56:47 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.24
	IPs:
	  IP:  10.244.0.24
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kksmh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kksmh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m16s                default-scheduler  Successfully assigned default/nginx to addons-069011
	  Warning  Failed     85s (x3 over 4m30s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     85s (x3 over 4m30s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    59s (x4 over 4m30s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     59s (x4 over 4m30s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    44s (x4 over 6m15s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-069011/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:01:13 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rfz5d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-rfz5d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  110s                default-scheduler  Successfully assigned default/task-pv-pod to addons-069011
	  Warning  Failed     25s                 kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     25s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    25s                 kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     25s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    13s (x2 over 110s)  kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s54zg (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-s54zg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-wj8lw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-sp7zb" not found
	Error from server (NotFound): pods "amd-gpu-device-plugin-flfw9" not found
	Error from server (NotFound): pods "kube-ingress-dns-minikube" not found
	Error from server (NotFound): pods "registry-66898fdd98-bl4r5" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-069011 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-wj8lw ingress-nginx-admission-patch-sp7zb amd-gpu-device-plugin-flfw9 kube-ingress-dns-minikube registry-66898fdd98-bl4r5 helper-pod-create-pvc-b66829ae-c3bf-4791-ad4d-a10eaa2a7feb: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (363.65s)
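Both nginx pods above fail for the same reason: kubelet's unauthenticated pulls of docker.io/nginx:alpine and docker.io/nginx hit Docker Hub's toomanyrequests rate limit (the busybox pod is merely unscheduled behind its pending PVC). A minimal mitigation sketch for reruns, assuming the profile name from this report, is to side-load the images from the host so kubelet never pulls from Docker Hub:

	# Hedged sketch: pull once on the host (where a Docker Hub login or a
	# warm cache can absorb the rate limit), then copy the images into the
	# minikube node so the in-cluster pull becomes a local no-op.
	docker pull docker.io/nginx:alpine
	docker pull docker.io/nginx
	minikube -p addons-069011 image load docker.io/nginx:alpine
	minikube -p addons-069011 image load docker.io/nginx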

TestFunctional/parallel/DashboardCmd (302.53s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-836309 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-836309 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-836309 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-836309 --alsologtostderr -v=1] stderr:
I0917 00:17:41.103532  583280 out.go:360] Setting OutFile to fd 1 ...
I0917 00:17:41.104617  583280 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:17:41.104631  583280 out.go:374] Setting ErrFile to fd 2...
I0917 00:17:41.104635  583280 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:17:41.104866  583280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
I0917 00:17:41.105187  583280 mustload.go:65] Loading cluster: functional-836309
I0917 00:17:41.105581  583280 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:17:41.105969  583280 cli_runner.go:164] Run: docker container inspect functional-836309 --format={{.State.Status}}
I0917 00:17:41.124680  583280 host.go:66] Checking if "functional-836309" exists ...
I0917 00:17:41.124975  583280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0917 00:17:41.185497  583280 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-17 00:17:41.174021254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0917 00:17:41.185647  583280 api_server.go:166] Checking apiserver status ...
I0917 00:17:41.185726  583280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0917 00:17:41.185794  583280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-836309
I0917 00:17:41.209372  583280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/functional-836309/id_rsa Username:docker}
I0917 00:17:41.312881  583280 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5553/cgroup
W0917 00:17:41.323849  583280 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5553/cgroup: Process exited with status 1
stdout:

stderr:
I0917 00:17:41.323909  583280 ssh_runner.go:195] Run: ls
I0917 00:17:41.328059  583280 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0917 00:17:41.333985  583280 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W0917 00:17:41.334065  583280 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0917 00:17:41.334291  583280 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:17:41.334330  583280 addons.go:69] Setting dashboard=true in profile "functional-836309"
I0917 00:17:41.334349  583280 addons.go:238] Setting addon dashboard=true in "functional-836309"
I0917 00:17:41.334384  583280 host.go:66] Checking if "functional-836309" exists ...
I0917 00:17:41.334885  583280 cli_runner.go:164] Run: docker container inspect functional-836309 --format={{.State.Status}}
I0917 00:17:41.356426  583280 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0917 00:17:41.357898  583280 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0917 00:17:41.359191  583280 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0917 00:17:41.359215  583280 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0917 00:17:41.359305  583280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-836309
I0917 00:17:41.378819  583280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/functional-836309/id_rsa Username:docker}
I0917 00:17:41.488683  583280 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0917 00:17:41.488710  583280 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0917 00:17:41.508439  583280 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0917 00:17:41.508473  583280 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0917 00:17:41.530438  583280 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0917 00:17:41.530480  583280 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0917 00:17:41.552119  583280 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0917 00:17:41.552146  583280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0917 00:17:41.572807  583280 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0917 00:17:41.572844  583280 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0917 00:17:41.593592  583280 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0917 00:17:41.593627  583280 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0917 00:17:41.614634  583280 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0917 00:17:41.614661  583280 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0917 00:17:41.634946  583280 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0917 00:17:41.634975  583280 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0917 00:17:41.655092  583280 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0917 00:17:41.655117  583280 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0917 00:17:41.674552  583280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0917 00:17:42.132624  583280 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-836309 addons enable metrics-server

I0917 00:17:42.134106  583280 addons.go:201] Writing out "functional-836309" config to set dashboard=true...
W0917 00:17:42.134364  583280 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0917 00:17:42.135054  583280 kapi.go:59] client config for functional-836309: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0917 00:17:42.135703  583280 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0917 00:17:42.135726  583280 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0917 00:17:42.135734  583280 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0917 00:17:42.135741  583280 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0917 00:17:42.135750  583280 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0917 00:17:42.145128  583280 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  4684ad18-99d2-46ed-b84c-666eceff12f0 1204 0 2025-09-17 00:17:42 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-17 00:17:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.103.15.120,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.103.15.120],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0917 00:17:42.145303  583280 out.go:285] * Launching proxy ...
* Launching proxy ...
I0917 00:17:42.145378  583280 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-836309 proxy --port 36195]
I0917 00:17:42.145753  583280 dashboard.go:157] Waiting for kubectl to output host:port ...
I0917 00:17:42.191568  583280 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0917 00:17:42.191801  583280 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0917 00:17:42.200523  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6dc97ae1-d1f5-4e7d-9cd6-3ae3fe837e32] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc00081f480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008a08c0 TLS:<nil>}
I0917 00:17:42.200626  583280 retry.go:31] will retry after 70.103µs: Temporary Error: unexpected response code: 503
I0917 00:17:42.204245  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[91af82e7-0390-486f-b564-c179d9f70fae] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc0008b8440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b6000 TLS:<nil>}
I0917 00:17:42.204297  583280 retry.go:31] will retry after 160.331µs: Temporary Error: unexpected response code: 503
I0917 00:17:42.207454  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[96999a3c-e623-4e80-82b1-d276a33a1955] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc00081f5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008a0a00 TLS:<nil>}
I0917 00:17:42.207502  583280 retry.go:31] will retry after 154.305µs: Temporary Error: unexpected response code: 503
I0917 00:17:42.210575  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a1394c1e-5a76-49ae-9ecf-a74af1d2d4c4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc0008b8540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b7e00 TLS:<nil>}
I0917 00:17:42.210613  583280 retry.go:31] will retry after 262.139µs: Temporary Error: unexpected response code: 503
I0917 00:17:42.213844  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1a3a0762-9500-4cf8-bd9c-08fc4e6f11d4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc00081f6c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008a0b40 TLS:<nil>}
I0917 00:17:42.213898  583280 retry.go:31] will retry after 407.76µs: Temporary Error: unexpected response code: 503
I0917 00:17:42.217460  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3bf97ff2-ad67-443d-86fe-ad56cc1843c0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc0008b8640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001696000 TLS:<nil>}
I0917 00:17:42.217516  583280 retry.go:31] will retry after 1.063859ms: Temporary Error: unexpected response code: 503
I0917 00:17:42.221190  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5f2977cc-5d11-4172-a0d9-5888088f33e0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc0008b8700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008a0c80 TLS:<nil>}
I0917 00:17:42.221242  583280 retry.go:31] will retry after 1.419089ms: Temporary Error: unexpected response code: 503
I0917 00:17:42.225830  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[48868189-7cc4-494d-bcce-a62a15a89767] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc00090ae80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008a0f00 TLS:<nil>}
I0917 00:17:42.225889  583280 retry.go:31] will retry after 1.047125ms: Temporary Error: unexpected response code: 503
I0917 00:17:42.229281  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e2e8bb5f-c150-4a79-93b0-02ee3d65d9e0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc00081f7c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207040 TLS:<nil>}
I0917 00:17:42.229333  583280 retry.go:31] will retry after 2.905642ms: Temporary Error: unexpected response code: 503
I0917 00:17:42.235490  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dd4e49a9-5e16-4384-942b-d12f030e9dfd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc00090afc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001696140 TLS:<nil>}
I0917 00:17:42.235547  583280 retry.go:31] will retry after 1.987519ms: Temporary Error: unexpected response code: 503
I0917 00:17:42.240147  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e81572f1-3051-49e4-a806-fde48e0c307c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc0008b8840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207180 TLS:<nil>}
I0917 00:17:42.240194  583280 retry.go:31] will retry after 7.55335ms: Temporary Error: unexpected response code: 503
I0917 00:17:42.251046  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b724a200-ecc6-4903-96f3-8a08f3eeaf86] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc00090b0c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008a1540 TLS:<nil>}
I0917 00:17:42.251125  583280 retry.go:31] will retry after 8.431609ms: Temporary Error: unexpected response code: 503
I0917 00:17:42.262802  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[117a30fe-9a3f-4e3f-bcdc-ced30f9db4df] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc0008b8900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002072c0 TLS:<nil>}
I0917 00:17:42.262867  583280 retry.go:31] will retry after 8.559843ms: Temporary Error: unexpected response code: 503
I0917 00:17:42.274649  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7f6d8701-7c6e-4e00-83e9-24a9a8b8b967] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc0008b89c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008a1680 TLS:<nil>}
I0917 00:17:42.274753  583280 retry.go:31] will retry after 23.343102ms: Temporary Error: unexpected response code: 503
I0917 00:17:42.302084  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bfcd7288-27bb-4aa6-8ac3-b645e448c309] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc00090b180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008a17c0 TLS:<nil>}
I0917 00:17:42.302149  583280 retry.go:31] will retry after 35.091867ms: Temporary Error: unexpected response code: 503
I0917 00:17:42.340546  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b7e1d78b-d803-4df6-8f1d-2e9f2b18ea52] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc00081f940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207400 TLS:<nil>}
I0917 00:17:42.340612  583280 retry.go:31] will retry after 33.483229ms: Temporary Error: unexpected response code: 503
I0917 00:17:42.377752  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2bcd335c-f824-4119-817f-79f3ea813ab6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc0008b8ac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001696280 TLS:<nil>}
I0917 00:17:42.377848  583280 retry.go:31] will retry after 42.618656ms: Temporary Error: unexpected response code: 503
I0917 00:17:42.423916  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[013bb3af-06d8-4b56-a849-e3b81596aa4e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc00090b2c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008a1900 TLS:<nil>}
I0917 00:17:42.423999  583280 retry.go:31] will retry after 72.177484ms: Temporary Error: unexpected response code: 503
I0917 00:17:42.499135  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[51a4abda-e934-44a0-ad9c-e4c9787be8b7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc0008b8bc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207540 TLS:<nil>}
I0917 00:17:42.499208  583280 retry.go:31] will retry after 154.929968ms: Temporary Error: unexpected response code: 503
I0917 00:17:42.657802  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a842af1f-4bcf-4298-90b7-bd3b337a439a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc00014de00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008a1a40 TLS:<nil>}
I0917 00:17:42.657883  583280 retry.go:31] will retry after 173.949837ms: Temporary Error: unexpected response code: 503
I0917 00:17:42.835464  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2a46e9b0-1d3d-4080-8c62-f535bdf18f29] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:42 GMT]] Body:0xc0008b8c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207680 TLS:<nil>}
I0917 00:17:42.835534  583280 retry.go:31] will retry after 431.058889ms: Temporary Error: unexpected response code: 503
I0917 00:17:43.270262  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[662841fe-bb97-41f3-96ad-7fb274951e28] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:43 GMT]] Body:0xc00038cb00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008a1b80 TLS:<nil>}
I0917 00:17:43.270336  583280 retry.go:31] will retry after 501.856011ms: Temporary Error: unexpected response code: 503
I0917 00:17:43.776208  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ddc36532-c667-4ddb-8f1e-65ae943fe45b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:43 GMT]] Body:0xc00081fa80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207900 TLS:<nil>}
I0917 00:17:43.776278  583280 retry.go:31] will retry after 846.457542ms: Temporary Error: unexpected response code: 503
I0917 00:17:44.627255  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2e360a4e-6853-49d9-ba0a-5b791a9af140] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:44 GMT]] Body:0xc000099a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016963c0 TLS:<nil>}
I0917 00:17:44.627326  583280 retry.go:31] will retry after 668.378023ms: Temporary Error: unexpected response code: 503
I0917 00:17:45.299653  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ead10ecf-b82b-40ee-9f17-d47b976c4bb0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:45 GMT]] Body:0xc000250e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207b80 TLS:<nil>}
I0917 00:17:45.299730  583280 retry.go:31] will retry after 1.115592548s: Temporary Error: unexpected response code: 503
I0917 00:17:46.419324  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e7a015d9-a0e1-462d-aa8c-1f8e0285b874] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:46 GMT]] Body:0xc00081fbc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207cc0 TLS:<nil>}
I0917 00:17:46.419429  583280 retry.go:31] will retry after 2.953264707s: Temporary Error: unexpected response code: 503
I0917 00:17:49.378489  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[371b9f9c-4637-4f06-9a6b-a60d71352bc9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:49 GMT]] Body:0xc00081fc80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001696500 TLS:<nil>}
I0917 00:17:49.378583  583280 retry.go:31] will retry after 2.157137226s: Temporary Error: unexpected response code: 503
I0917 00:17:51.540663  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e278716b-fd1e-456f-906b-b82e2227fda6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:51 GMT]] Body:0xc0008b8d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207e00 TLS:<nil>}
I0917 00:17:51.540734  583280 retry.go:31] will retry after 6.876794332s: Temporary Error: unexpected response code: 503
I0917 00:17:58.422374  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0bf9502d-2cb6-472e-8f4d-3c8524c1c9cf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:17:58 GMT]] Body:0xc00081fd40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008a1cc0 TLS:<nil>}
I0917 00:17:58.422485  583280 retry.go:31] will retry after 12.52525097s: Temporary Error: unexpected response code: 503
I0917 00:18:10.955589  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fc858075-15a1-4f58-b806-9d3fe99e34fb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:18:10 GMT]] Body:0xc000251500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008a1e00 TLS:<nil>}
I0917 00:18:10.955656  583280 retry.go:31] will retry after 9.449087143s: Temporary Error: unexpected response code: 503
I0917 00:18:20.408761  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a08141ac-5cd1-4faa-9954-67822d09bd15] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:18:20 GMT]] Body:0xc00081fdc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00164e000 TLS:<nil>}
I0917 00:18:20.408843  583280 retry.go:31] will retry after 21.762539428s: Temporary Error: unexpected response code: 503
I0917 00:18:42.175357  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a71234fc-34e3-4d9d-bc73-c7f1d3d285b7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:18:42 GMT]] Body:0xc0008b8ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001696640 TLS:<nil>}
I0917 00:18:42.175441  583280 retry.go:31] will retry after 42.776347322s: Temporary Error: unexpected response code: 503
I0917 00:19:24.956286  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9fff4d0d-614b-48a0-abf5-4ed92b771322] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:19:24 GMT]] Body:0xc000251680 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001696780 TLS:<nil>}
I0917 00:19:24.956365  583280 retry.go:31] will retry after 33.973563023s: Temporary Error: unexpected response code: 503
I0917 00:19:58.935954  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e7b130f7-e54c-499f-a3bc-d2caebb68521] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:19:58 GMT]] Body:0xc0008b8200 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000698280 TLS:<nil>}
I0917 00:19:58.936053  583280 retry.go:31] will retry after 1m18.096021435s: Temporary Error: unexpected response code: 503
I0917 00:21:17.035859  583280 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[28bc9b05-66e9-4973-aa59-7e78a5ce812d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:21:17 GMT]] Body:0xc00081eac0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0008a0000 TLS:<nil>}
I0917 00:21:17.035948  583280 retry.go:31] will retry after 1m27.818808998s: Temporary Error: unexpected response code: 503
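The dashboard URL never materialized because every probe of the proxied service returned 503 for the full wait window; note the retry delays above back off roughly exponentially, from microseconds to over a minute. A 503 from this URL means the kubernetes-dashboard Service had no ready endpoints. A minimal manual reproduction of the same health probe, assuming the context name from this run, is:

	# Start the same proxy the test launches, then hit the service URL.
	kubectl --context functional-836309 proxy --port 36195 &
	curl -i http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
	# On a 503, inspect the backing pod; the dashboard images come from
	# docker.io (see the "Using image" lines above), so the Docker Hub
	# rate limit seen in the addons tests is a plausible, unconfirmed cause.
	kubectl --context functional-836309 -n kubernetes-dashboard get pods
	kubectl --context functional-836309 -n kubernetes-dashboard describe pods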
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-836309
helpers_test.go:243: (dbg) docker inspect functional-836309:

-- stdout --
	[
	    {
	        "Id": "3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5",
	        "Created": "2025-09-17T00:09:44.133139993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 564972,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:09:44.169133569Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5/hosts",
	        "LogPath": "/var/lib/docker/containers/3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5/3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5-json.log",
	        "Name": "/functional-836309",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-836309:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-836309",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5",
	                "LowerDir": "/var/lib/docker/overlay2/de2b96e7bc9a2a6ce5c4debfc0e842c0965361244c0995ec8ded64beb49c8264-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de2b96e7bc9a2a6ce5c4debfc0e842c0965361244c0995ec8ded64beb49c8264/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de2b96e7bc9a2a6ce5c4debfc0e842c0965361244c0995ec8ded64beb49c8264/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de2b96e7bc9a2a6ce5c4debfc0e842c0965361244c0995ec8ded64beb49c8264/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-836309",
	                "Source": "/var/lib/docker/volumes/functional-836309/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-836309",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-836309",
	                "name.minikube.sigs.k8s.io": "functional-836309",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "23448c026a24457ded735e88238de72a95f1b2d956a93efb7f9494b958befb64",
	            "SandboxKey": "/var/run/docker/netns/23448c026a24",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-836309": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:01:e3:2b:98:c6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f11c0adeed5b0a571ce66bcfa96404e5751f9da2bd5366531798e16160202bd2",
	                    "EndpointID": "47b04d28f82bdaef821c6f0a8dc045f3604bb616ac73b4ea262d9bb6aa905794",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-836309",
	                        "3ec3e877de9b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
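The inspect output confirms each guest port is published on 127.0.0.1 with an ephemeral host port (22/tcp on 33143, 8441/tcp on 33146), matching the SSH and apiserver endpoints used earlier in the dashboard run. The same Go template the harness executes can be run by hand to recover a mapping:

	# Print the host port backing the node's SSH port (33143 in this run);
	# substitute "8441/tcp" to get the apiserver's published port.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-836309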
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-836309 -n functional-836309
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-836309 logs -n 25: (1.545223002s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                   ARGS                                                    │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-836309 ssh cat /etc/hostname                                                                   │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ tunnel         │ functional-836309 tunnel --alsologtostderr                                                                │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ tunnel         │ functional-836309 tunnel --alsologtostderr                                                                │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ tunnel         │ functional-836309 tunnel --alsologtostderr                                                                │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ addons         │ functional-836309 addons list                                                                             │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:17 UTC │ 17 Sep 25 00:17 UTC │
	│ addons         │ functional-836309 addons list -o json                                                                     │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:17 UTC │ 17 Sep 25 00:17 UTC │
	│ start          │ -p functional-836309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:17 UTC │                     │
	│ start          │ -p functional-836309 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio           │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:17 UTC │                     │
	│ start          │ -p functional-836309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:17 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-836309 --alsologtostderr -v=1                                            │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:17 UTC │                     │
	│ service        │ functional-836309 service list                                                                            │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ service        │ functional-836309 service list -o json                                                                    │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ service        │ functional-836309 service --namespace=default --https --url hello-node                                    │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │                     │
	│ service        │ functional-836309 service hello-node --url --format={{.IP}}                                               │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │                     │
	│ service        │ functional-836309 service hello-node --url                                                                │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │                     │
	│ update-context │ functional-836309 update-context --alsologtostderr -v=2                                                   │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ update-context │ functional-836309 update-context --alsologtostderr -v=2                                                   │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ update-context │ functional-836309 update-context --alsologtostderr -v=2                                                   │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ image          │ functional-836309 image ls --format short --alsologtostderr                                               │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ image          │ functional-836309 image ls --format yaml --alsologtostderr                                                │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ ssh            │ functional-836309 ssh pgrep buildkitd                                                                     │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │                     │
	│ image          │ functional-836309 image build -t localhost/my-image:functional-836309 testdata/build --alsologtostderr    │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ image          │ functional-836309 image ls --format json --alsologtostderr                                                │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ image          │ functional-836309 image ls --format table --alsologtostderr                                               │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ image          │ functional-836309 image ls                                                                                │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:17:40
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:17:40.936845  583199 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:17:40.936953  583199 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:17:40.936960  583199 out.go:374] Setting ErrFile to fd 2...
	I0917 00:17:40.936966  583199 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:17:40.937339  583199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:17:40.937877  583199 out.go:368] Setting JSON to false
	I0917 00:17:40.938867  583199 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10804,"bootTime":1758057457,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:17:40.938993  583199 start.go:140] virtualization: kvm guest
	I0917 00:17:40.941492  583199 out.go:179] * [functional-836309] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:17:40.944227  583199 notify.go:220] Checking for updates...
	I0917 00:17:40.944335  583199 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:17:40.946765  583199 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:17:40.948295  583199 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:17:40.949696  583199 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 00:17:40.951158  583199 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:17:40.952856  583199 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:17:40.955046  583199 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:17:40.955588  583199 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:17:40.980713  583199 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:17:40.980830  583199 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:17:41.040600  583199 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-17 00:17:41.029871976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:17:41.040710  583199 docker.go:318] overlay module found
	I0917 00:17:41.043008  583199 out.go:179] * Using the docker driver based on existing profile
	I0917 00:17:41.045273  583199 start.go:304] selected driver: docker
	I0917 00:17:41.045298  583199 start.go:918] validating driver "docker" against &{Name:functional-836309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-836309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:17:41.045421  583199 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:17:41.048155  583199 out.go:203] 
	W0917 00:17:41.049889  583199 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0917 00:17:41.051309  583199 out.go:203] 
	
	
	==> CRI-O <==
	Sep 17 00:21:54 functional-836309 crio[4225]: time="2025-09-17 00:21:54.538015549Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=1af8a14a-6ace-4d81-a90d-d792af673ad0 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:21:54 functional-836309 crio[4225]: time="2025-09-17 00:21:54.538267016Z" level=info msg="Image docker.io/nginx:alpine not found" id=1af8a14a-6ace-4d81-a90d-d792af673ad0 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:21:55 functional-836309 crio[4225]: time="2025-09-17 00:21:55.538631557Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=75d9a59b-6580-4dd6-9577-a17c0473c01f name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:21:55 functional-836309 crio[4225]: time="2025-09-17 00:21:55.538952270Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=75d9a59b-6580-4dd6-9577-a17c0473c01f name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:21:56 functional-836309 crio[4225]: time="2025-09-17 00:21:56.538959779Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=54bb72c9-ba5f-41b0-a599-b30b6c2c7db7 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:21:56 functional-836309 crio[4225]: time="2025-09-17 00:21:56.539246285Z" level=info msg="Image docker.io/mysql:5.7 not found" id=54bb72c9-ba5f-41b0-a599-b30b6c2c7db7 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:22:06 functional-836309 crio[4225]: time="2025-09-17 00:22:06.538734619Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=837e4c5a-6577-498e-bdf8-7e05c7a9987b name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:22:06 functional-836309 crio[4225]: time="2025-09-17 00:22:06.539039832Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=837e4c5a-6577-498e-bdf8-7e05c7a9987b name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:22:09 functional-836309 crio[4225]: time="2025-09-17 00:22:09.538520499Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b3e9388c-d4fe-4db6-b085-65f33a134783 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:22:09 functional-836309 crio[4225]: time="2025-09-17 00:22:09.538798016Z" level=info msg="Image docker.io/nginx:alpine not found" id=b3e9388c-d4fe-4db6-b085-65f33a134783 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:22:13 functional-836309 crio[4225]: time="2025-09-17 00:22:13.297962663Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=30834eed-135b-4e9d-bbd5-abe2be6ce06d name=/runtime.v1.ImageService/PullImage
	Sep 17 00:22:13 functional-836309 crio[4225]: time="2025-09-17 00:22:13.298776021Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=c9ec781d-f1ea-4ecc-9e16-bd7f1331c315 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:22:13 functional-836309 crio[4225]: time="2025-09-17 00:22:13.303945757Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Sep 17 00:22:21 functional-836309 crio[4225]: time="2025-09-17 00:22:21.538936629Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=07b13018-8a51-4205-af0c-cb0fd769741c name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:22:21 functional-836309 crio[4225]: time="2025-09-17 00:22:21.539269000Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=07b13018-8a51-4205-af0c-cb0fd769741c name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:22:24 functional-836309 crio[4225]: time="2025-09-17 00:22:24.538676232Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=884826e2-7600-4beb-bb89-2f65f2200e67 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:22:24 functional-836309 crio[4225]: time="2025-09-17 00:22:24.538973931Z" level=info msg="Image docker.io/nginx:alpine not found" id=884826e2-7600-4beb-bb89-2f65f2200e67 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:22:25 functional-836309 crio[4225]: time="2025-09-17 00:22:25.538273647Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=ff9c8df0-17e9-4d9a-bf50-33094ad81dde name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:22:25 functional-836309 crio[4225]: time="2025-09-17 00:22:25.538615794Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=ff9c8df0-17e9-4d9a-bf50-33094ad81dde name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:22:33 functional-836309 crio[4225]: time="2025-09-17 00:22:33.538647604Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=4f491fc9-9972-4762-9a57-899951fb8037 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:22:33 functional-836309 crio[4225]: time="2025-09-17 00:22:33.538948556Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=4f491fc9-9972-4762-9a57-899951fb8037 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:22:39 functional-836309 crio[4225]: time="2025-09-17 00:22:39.538708003Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=a16f23f9-3635-4cdd-8ec1-4b4bbfc6afbb name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:22:39 functional-836309 crio[4225]: time="2025-09-17 00:22:39.539022040Z" level=info msg="Image docker.io/nginx:alpine not found" id=a16f23f9-3635-4cdd-8ec1-4b4bbfc6afbb name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:22:40 functional-836309 crio[4225]: time="2025-09-17 00:22:40.538547525Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=c287ab54-73e2-4a7a-87c2-a6c79bbfc1d2 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:22:40 functional-836309 crio[4225]: time="2025-09-17 00:22:40.538901147Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=c287ab54-73e2-4a7a-87c2-a6c79bbfc1d2 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb474edf243b1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              mount-munger              0                   d689b11bc9243       busybox-mount
	9f2aad7cc830a       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      11 minutes ago      Running             kube-apiserver            0                   cb31a6d151f18       kube-apiserver-functional-836309
	8fc6aae6af439       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      11 minutes ago      Running             kube-controller-manager   2                   073e9000e2cbd       kube-controller-manager-functional-836309
	a14ceabc188eb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago      Running             etcd                      1                   bd997b17bb8d3       etcd-functional-836309
	888d62ee0b634       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      11 minutes ago      Exited              kube-controller-manager   1                   073e9000e2cbd       kube-controller-manager-functional-836309
	c06f60831d1a2       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      11 minutes ago      Running             kube-proxy                1                   04529c3273474       kube-proxy-cbvjf
	64858777ddc03       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      11 minutes ago      Running             kindnet-cni               1                   e619e5a0562ff       kindnet-h2rjf
	8414e6a217a0a       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      11 minutes ago      Running             kube-scheduler            1                   c5ca55e367f9f       kube-scheduler-functional-836309
	8750ce41941ba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       1                   9bd06274bf9f1       storage-provisioner
	9d874bdc79320       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 minutes ago      Running             coredns                   1                   4111a7c1816a0       coredns-66bc5c9577-zvmqf
	43960daf0ceb5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 minutes ago      Exited              coredns                   0                   4111a7c1816a0       coredns-66bc5c9577-zvmqf
	fee9c2e341d4f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Exited              storage-provisioner       0                   9bd06274bf9f1       storage-provisioner
	94e0331fcf046       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      12 minutes ago      Exited              kindnet-cni               0                   e619e5a0562ff       kindnet-h2rjf
	2590bb5313e64       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      12 minutes ago      Exited              kube-proxy                0                   04529c3273474       kube-proxy-cbvjf
	fd4423f996e17       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      12 minutes ago      Exited              kube-scheduler            0                   c5ca55e367f9f       kube-scheduler-functional-836309
	66e1997c75a09       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      12 minutes ago      Exited              etcd                      0                   bd997b17bb8d3       etcd-functional-836309
	
	
	==> coredns [43960daf0ceb508755bb95ca37b4c30a5d31d7bdbf6bef6d16e3dbefa1056330] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58276 - 22452 "HINFO IN 7807615287491316741.4205491171577213210. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036670075s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9d874bdc7932076f658b9567185beccffdb2e85d489d293dfe85e3e619013c1f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34900 - 1175 "HINFO IN 6559932629016620651.4444246566734803126. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.054012876s
	
	
	==> describe nodes <==
	Name:               functional-836309
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-836309
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=functional-836309
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T00_09_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:09:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-836309
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:22:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:22:07 +0000   Wed, 17 Sep 2025 00:09:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:22:07 +0000   Wed, 17 Sep 2025 00:09:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:22:07 +0000   Wed, 17 Sep 2025 00:09:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:22:07 +0000   Wed, 17 Sep 2025 00:10:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-836309
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 67f7de0bcecd43499ea9b16c8c00a864
	  System UUID:                e097105d-a213-4ebf-95fe-cce4cad422c0
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-m76kz                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-node-connect-7d85dfc575-54xkq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  default                     mysql-5bb876957f-l9pq7                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     11m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-zvmqf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-836309                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-h2rjf                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-836309              250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-836309     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-cbvjf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-836309              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-htbkl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lm4gk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-836309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-836309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-836309 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-836309 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-836309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-836309 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-836309 event: Registered Node functional-836309 in Controller
	  Normal  NodeReady                12m                kubelet          Node functional-836309 status is now: NodeReady
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-836309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-836309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-836309 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                node-controller  Node functional-836309 event: Registered Node functional-836309 in Controller
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	
	
	==> etcd [66e1997c75a09719465fdda73ab2f14bd72552ff33212c4d720f74944117320d] <==
	{"level":"warn","ts":"2025-09-17T00:09:55.212910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.220259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.227159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.234529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.243853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.251054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.257902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51972","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:10:42.783237Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-17T00:10:42.783351Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-836309","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-17T00:10:42.783494Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T00:10:49.785151Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T00:10:49.785250Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:10:49.785304Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-09-17T00:10:49.785881Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:10:49.785904Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:10:49.785429Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:10:49.785915Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-17T00:10:49.785929Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:10:49.785947Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:10:49.785969Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-17T00:10:49.785982Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-17T00:10:49.788632Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-17T00:10:49.788702Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:10:49.788727Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-17T00:10:49.788733Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-836309","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [a14ceabc188ebbf10535dda7c1f798592d2e79e03743ad28e2bd444ce75333ba] <==
	{"level":"warn","ts":"2025-09-17T00:11:02.777282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.783883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.791702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.799199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.806444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.812694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.819824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.828034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.834969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.841980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.849538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.863753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.870164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.878044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.884349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.890622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.898140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.905536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.912507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.926102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.939007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.982536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50360","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:21:02.483178Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1020}
	{"level":"info","ts":"2025-09-17T00:21:02.502430Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1020,"took":"18.848289ms","hash":1664117828,"current-db-size-bytes":3403776,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1634304,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-09-17T00:21:02.502483Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1664117828,"revision":1020,"compact-revision":-1}
	
	
	==> kernel <==
	 00:22:42 up  3:05,  0 users,  load average: 0.34, 0.43, 8.43
	Linux functional-836309 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [64858777ddc0357994b52a6fd8bf79dba5ac39143453505e0f08e2a242aecae8] <==
	I0917 00:20:33.716298       1 main.go:301] handling current node
	I0917 00:20:43.716421       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:20:43.716476       1 main.go:301] handling current node
	I0917 00:20:53.725170       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:20:53.725212       1 main.go:301] handling current node
	I0917 00:21:03.717539       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:21:03.717585       1 main.go:301] handling current node
	I0917 00:21:13.716959       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:21:13.717006       1 main.go:301] handling current node
	I0917 00:21:23.717031       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:21:23.717085       1 main.go:301] handling current node
	I0917 00:21:33.717247       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:21:33.717297       1 main.go:301] handling current node
	I0917 00:21:43.716251       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:21:43.716300       1 main.go:301] handling current node
	I0917 00:21:53.722506       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:21:53.722554       1 main.go:301] handling current node
	I0917 00:22:03.722488       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:22:03.722539       1 main.go:301] handling current node
	I0917 00:22:13.717213       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:22:13.717264       1 main.go:301] handling current node
	I0917 00:22:23.716527       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:22:23.716599       1 main.go:301] handling current node
	I0917 00:22:33.716969       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:22:33.717045       1 main.go:301] handling current node
	
	
	==> kindnet [94e0331fcf046a39dfa4b150cab0807b41735b3149fccd0d7298c096121f3177] <==
	I0917 00:10:04.407562       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0917 00:10:04.407829       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0917 00:10:04.407974       1 main.go:148] setting mtu 1500 for CNI 
	I0917 00:10:04.407992       1 main.go:178] kindnetd IP family: "ipv4"
	I0917 00:10:04.408041       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-17T00:10:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0917 00:10:04.608241       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0917 00:10:04.608325       1 controller.go:381] "Waiting for informer caches to sync"
	I0917 00:10:04.608338       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0917 00:10:04.608850       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0917 00:10:05.008798       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0917 00:10:05.008823       1 metrics.go:72] Registering metrics
	I0917 00:10:05.008870       1 controller.go:711] "Syncing nftables rules"
	I0917 00:10:14.613627       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:14.613697       1 main.go:301] handling current node
	I0917 00:10:24.615570       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:24.615608       1 main.go:301] handling current node
	I0917 00:10:34.612524       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:34.612559       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9f2aad7cc830a3ec57ba1b3d2cd335c4f402ff995fba44cd8dd9944ea36855bb] <==
	I0917 00:11:25.122372       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.76.119"}
	I0917 00:11:27.305503       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.106.76.206"}
	I0917 00:12:08.543295       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.9.127"}
	I0917 00:12:16.483422       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:12:26.931027       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:13:26.407498       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:13:39.931798       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:14:38.574091       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:15:09.009462       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:15:47.255026       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:16:24.666189       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:17:12.893722       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:17:41.988170       1 controller.go:667] quota admission added evaluator for: namespaces
	I0917 00:17:42.111550       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.15.120"}
	I0917 00:17:42.123959       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.205.187"}
	I0917 00:17:49.333363       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:17:56.710543       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.163.232"}
	I0917 00:18:40.466273       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:19:10.646833       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:20:06.755790       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:20:15.654078       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:21:03.383145       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:21:15.681088       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:21:44.816934       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:22:16.629730       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [888d62ee0b634c673d1878ce150c6f0034e298592a41de5b4a133d003db1a139] <==
	I0917 00:10:43.989698       1 serving.go:386] Generated self-signed cert in-memory
	I0917 00:10:44.308654       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0917 00:10:44.308686       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:10:44.310251       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 00:10:44.310301       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 00:10:44.310653       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0917 00:10:44.310800       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0917 00:10:56.321004       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [8fc6aae6af439080e3411b9cb8143eddc1da6c5a6e3211c2a191a3dbfa865ca9] <==
	I0917 00:11:06.793750       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0917 00:11:06.793797       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0917 00:11:06.795031       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0917 00:11:06.795086       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0917 00:11:06.795122       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0917 00:11:06.795131       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0917 00:11:06.795137       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0917 00:11:06.795175       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0917 00:11:06.795208       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0917 00:11:06.797152       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0917 00:11:06.798500       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0917 00:11:06.800827       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:11:06.800851       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0917 00:11:06.800859       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 00:11:06.800834       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:11:06.803222       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:11:06.805177       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0917 00:11:06.807633       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0917 00:11:06.816406       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0917 00:17:42.036751       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:17:42.041055       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:17:42.045573       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:17:42.045609       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:17:42.049857       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:17:42.055310       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [2590bb5313e648a1d5258fd84180691999e1fa74ac7e4a9bad97c4eaec4d2485] <==
	I0917 00:10:04.193311       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:10:04.263769       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:10:04.364709       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:10:04.364767       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:10:04.364855       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:10:04.385096       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:10:04.385159       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:10:04.390876       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:10:04.391486       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:10:04.391511       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:10:04.393121       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:10:04.393158       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:10:04.393167       1 config.go:200] "Starting service config controller"
	I0917 00:10:04.393187       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:10:04.393201       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:10:04.393189       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:10:04.393246       1 config.go:309] "Starting node config controller"
	I0917 00:10:04.393260       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:10:04.493428       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:10:04.493462       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:10:04.493428       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:10:04.493439       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c06f60831d1a27beead1133ee09bd56597eea7ed1a44bd377eb0a2445447cee8] <==
	I0917 00:10:43.389590       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:10:43.460712       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:10:43.561820       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:10:43.561866       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:10:43.561957       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:10:43.585276       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:10:43.585350       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:10:43.590785       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:10:43.591164       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:10:43.591200       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:10:43.593011       1 config.go:200] "Starting service config controller"
	I0917 00:10:43.593356       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:10:43.593113       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:10:43.593126       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:10:43.593435       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:10:43.593437       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:10:43.593165       1 config.go:309] "Starting node config controller"
	I0917 00:10:43.593494       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:10:43.593503       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:10:43.693526       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:10:43.693578       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:10:43.693636       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8414e6a217a0a65711aa4a8781ace6ed51c30407bf0166b9c4024dad4b506e9c] <==
	I0917 00:10:44.134044       1 serving.go:386] Generated self-signed cert in-memory
	I0917 00:10:51.491622       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:10:51.491651       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:10:51.496210       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:51.496222       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0917 00:10:51.496254       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:51.496251       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:10:51.496272       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:10:51.496256       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0917 00:10:51.496635       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:10:51.496706       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:10:51.596824       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:10:51.597020       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0917 00:10:51.597094       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0917 00:11:03.387571       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0917 00:11:03.387692       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0917 00:11:03.387722       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:11:03.387745       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0917 00:11:03.387764       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0917 00:11:03.387800       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	
	
	==> kube-scheduler [fd4423f996e172ec520acd90ab88ecb92a9bfa721cc812a9d73b36f24a393306] <==
	E0917 00:09:56.365834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0917 00:09:56.365882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0917 00:09:56.366012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0917 00:09:56.366067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0917 00:09:56.366104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0917 00:09:56.366185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0917 00:09:56.366277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0917 00:09:56.366176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0917 00:09:56.366533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0917 00:09:56.366612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0917 00:09:56.366642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0917 00:09:56.366681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0917 00:09:56.366732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:09:56.366735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0917 00:09:56.366804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0917 00:09:56.366825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0917 00:09:56.366896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0917 00:09:56.366939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I0917 00:09:57.962884       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:42.641974       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 00:10:42.642087       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:42.642285       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0917 00:10:42.642311       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0917 00:10:42.642328       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0917 00:10:42.642359       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 17 00:22:13 functional-836309 kubelet[5462]: E0917 00:22:13.297489    5462 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 17 00:22:13 functional-836309 kubelet[5462]: E0917 00:22:13.297565    5462 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 17 00:22:13 functional-836309 kubelet[5462]: E0917 00:22:13.297841    5462 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-lm4gk_kubernetes-dashboard(3f7e653f-cd38-4dd9-8d08-5632496af8f8): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 00:22:13 functional-836309 kubelet[5462]: E0917 00:22:13.297904    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lm4gk" podUID="3f7e653f-cd38-4dd9-8d08-5632496af8f8"
	Sep 17 00:22:13 functional-836309 kubelet[5462]: E0917 00:22:13.298374    5462 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" image="kicbase/echo-server:latest"
	Sep 17 00:22:13 functional-836309 kubelet[5462]: E0917 00:22:13.298432    5462 kuberuntime_image.go:43] "Failed to pull image" err="short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" image="kicbase/echo-server:latest"
	Sep 17 00:22:13 functional-836309 kubelet[5462]: E0917 00:22:13.298626    5462 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-54xkq_default(2d5c821a-47c0-4488-b33d-e43b5a07a2f0): ErrImagePull: short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" logger="UnhandledError"
	Sep 17 00:22:13 functional-836309 kubelet[5462]: E0917 00:22:13.299927    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-54xkq" podUID="2d5c821a-47c0-4488-b33d-e43b5a07a2f0"
	Sep 17 00:22:14 functional-836309 kubelet[5462]: E0917 00:22:14.538204    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-m76kz" podUID="de55227f-8aa8-49c2-b1dc-b0517b716b2d"
	Sep 17 00:22:21 functional-836309 kubelet[5462]: E0917 00:22:21.539710    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-htbkl" podUID="a547af32-a08d-4709-9ee2-63f12a40647a"
	Sep 17 00:22:21 functional-836309 kubelet[5462]: E0917 00:22:21.657957    5462 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068541657692815  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 17 00:22:21 functional-836309 kubelet[5462]: E0917 00:22:21.658000    5462 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068541657692815  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 17 00:22:22 functional-836309 kubelet[5462]: E0917 00:22:22.538582    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="0f84d084-6e2e-4197-b486-4ba402096a6c"
	Sep 17 00:22:24 functional-836309 kubelet[5462]: E0917 00:22:24.539330    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="54252b1b-51bf-4359-848b-6b08a8f68dcd"
	Sep 17 00:22:25 functional-836309 kubelet[5462]: E0917 00:22:25.537776    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-54xkq" podUID="2d5c821a-47c0-4488-b33d-e43b5a07a2f0"
	Sep 17 00:22:25 functional-836309 kubelet[5462]: E0917 00:22:25.538910    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lm4gk" podUID="3f7e653f-cd38-4dd9-8d08-5632496af8f8"
	Sep 17 00:22:27 functional-836309 kubelet[5462]: E0917 00:22:27.537935    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-m76kz" podUID="de55227f-8aa8-49c2-b1dc-b0517b716b2d"
	Sep 17 00:22:31 functional-836309 kubelet[5462]: E0917 00:22:31.659460    5462 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068551659186245  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 17 00:22:31 functional-836309 kubelet[5462]: E0917 00:22:31.659506    5462 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068551659186245  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 17 00:22:39 functional-836309 kubelet[5462]: E0917 00:22:39.538677    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-54xkq" podUID="2d5c821a-47c0-4488-b33d-e43b5a07a2f0"
	Sep 17 00:22:39 functional-836309 kubelet[5462]: E0917 00:22:39.539335    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="54252b1b-51bf-4359-848b-6b08a8f68dcd"
	Sep 17 00:22:40 functional-836309 kubelet[5462]: E0917 00:22:40.539315    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lm4gk" podUID="3f7e653f-cd38-4dd9-8d08-5632496af8f8"
	Sep 17 00:22:41 functional-836309 kubelet[5462]: E0917 00:22:41.661276    5462 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068561660966233  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 17 00:22:41 functional-836309 kubelet[5462]: E0917 00:22:41.661333    5462 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068561660966233  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 17 00:22:42 functional-836309 kubelet[5462]: E0917 00:22:42.538294    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-m76kz" podUID="de55227f-8aa8-49c2-b1dc-b0517b716b2d"
	
	
	==> storage-provisioner [8750ce41941ba15a9b4b2e19cfe5128979331c1400a49209e1f4efb5b1318340] <==
	W0917 00:22:18.353612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:20.356688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:20.360796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:22.364595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:22.368803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:24.372193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:24.376716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:26.380935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:26.385908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:28.389798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:28.394499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:30.398059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:30.402321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:32.405921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:32.411736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:34.415290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:34.420327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:36.424093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:36.428137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:38.431859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:38.437691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:40.441179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:40.446491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:42.449907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:22:42.454923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fee9c2e341d4f1fd20c4ea1c22db8cd7eca409574ec8835d434658453643976f] <==
	W0917 00:10:17.462196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:19.465941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:19.471590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:21.475172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:21.479508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:23.483478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:23.491192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:25.495638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:25.501797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:27.506026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:27.512276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:29.515329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:29.519407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:31.522663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:31.529122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:33.532130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:33.536263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:35.539874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:35.544694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:37.549064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:37.553478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:39.557571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:39.563110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:41.566878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:41.571434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
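Two failure modes dominate the kubelet log above: Docker Hub's unauthenticated pull rate limit ("toomanyrequests") and CRI-O rejecting the short image name "kicbase/echo-server:latest" because no unqualified-search registries or short-name aliases are configured. For the latter, a minimal sketch of a short-name drop-in for the node's containers/image stack; the path /etc/containers/registries.conf comes from the error text itself, while the drop-in filename below is a hypothetical example:

	# /etc/containers/registries.conf.d/99-minikube-aliases.conf (hypothetical filename)
	# Either allow docker.io for unqualified short-name lookups ...
	unqualified-search-registries = ["docker.io"]

	# ... or map only the specific short name to a fully qualified reference.
	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"

Either setting alone would let kubelet resolve "kicbase/echo-server:latest"; the alias form is narrower and avoids making every other short name ambiguous.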
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-836309 -n functional-836309
helpers_test.go:269: (dbg) Run:  kubectl --context functional-836309 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-m76kz hello-node-connect-7d85dfc575-54xkq mysql-5bb876957f-l9pq7 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-htbkl kubernetes-dashboard-855c9754f9-lm4gk
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-836309 describe pod busybox-mount hello-node-75c85bcc94-m76kz hello-node-connect-7d85dfc575-54xkq mysql-5bb876957f-l9pq7 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-htbkl kubernetes-dashboard-855c9754f9-lm4gk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-836309 describe pod busybox-mount hello-node-75c85bcc94-m76kz hello-node-connect-7d85dfc575-54xkq mysql-5bb876957f-l9pq7 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-htbkl kubernetes-dashboard-855c9754f9-lm4gk: exit status 1 (114.815073ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:11:31 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  cri-o://cb474edf243b1a8e4e93b368e7e6be5f76c0c8b839e74e1c49c1a7bff20a0680
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 17 Sep 2025 00:12:00 +0000
	      Finished:     Wed, 17 Sep 2025 00:12:00 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zvp4d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zvp4d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  11m   default-scheduler  Successfully assigned default/busybox-mount to functional-836309
	  Normal  Pulling    11m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.264s (28.084s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-m76kz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:11:25 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c4fhc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-c4fhc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-m76kz to functional-836309
	  Normal   Pulling    6m12s (x5 over 11m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     5m42s (x5 over 11m)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     5m42s (x5 over 11m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    77s (x24 over 11m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     77s (x24 over 11m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-54xkq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:17:56 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9ldx8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9ldx8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m46s                default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-54xkq to functional-836309
	  Normal   Pulling    91s (x3 over 4m47s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     30s (x3 over 4m1s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     30s (x3 over 4m1s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4s (x4 over 4m1s)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4s (x4 over 4m1s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-l9pq7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:11:27 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-76bnk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-76bnk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  11m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-l9pq7 to functional-836309
	  Normal   Pulling    4m46s (x5 over 11m)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     3m31s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     2m26s (x16 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    70s (x22 over 10m)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     0s (x6 over 10m)      kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:12:08 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2v8fx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2v8fx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/nginx-svc to functional-836309
	  Normal   Pulling    3m43s (x5 over 10m)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     90s (x5 over 9m43s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     90s (x5 over 9m43s)   kubelet            Error: ErrImagePull
	  Warning  Failed     19s (x16 over 9m43s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4s (x17 over 9m43s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:11:37 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85lfd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-85lfd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/sp-pod to functional-836309
	  Normal   Pulling    4m18s (x5 over 11m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m1s (x5 over 10m)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m1s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     2m3s (x16 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    48s (x22 over 10m)   kubelet            Back-off pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-htbkl" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-lm4gk" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-836309 describe pod busybox-mount hello-node-75c85bcc94-m76kz hello-node-connect-7d85dfc575-54xkq mysql-5bb876957f-l9pq7 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-htbkl kubernetes-dashboard-855c9754f9-lm4gk: exit status 1
E0917 00:25:14.436435  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:26:37.505909  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/DashboardCmd (302.53s)
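Nearly every non-running pod in the describe output above is blocked on docker.io pulls that fail with "toomanyrequests", so the dashboard never becomes ready within the test window. A hedged mitigation sketch, not part of the test suite: authenticate pulls by attaching Docker Hub credentials to the service account (the secret name and credential variables are placeholders), or side-load the images from a host that can still pull them:

	# Hypothetical mitigation, run against this profile's context:
	kubectl --context functional-836309 create secret docker-registry dockerhub-creds \
	  --docker-username="$DOCKERHUB_USER" --docker-password="$DOCKERHUB_TOKEN"
	kubectl --context functional-836309 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'
	# Or pre-load the image so kubelet never contacts Docker Hub (assumes the
	# host's own pull succeeds and the tag resolves to the digest the manifest pins):
	minikube -p functional-836309 image load docker.io/kubernetesui/dashboard:v2.7.0

Note that service-account imagePullSecrets are injected at pod admission, so they help only pods created afterwards, and the patch above covers the default namespace; the dashboard pods run in kubernetes-dashboard, whose service account would need the same treatment.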

x
+
TestFunctional/parallel/ServiceCmdConnect (603.22s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-836309 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-836309 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-54xkq" [2d5c821a-47c0-4488-b33d-e43b5a07a2f0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E0917 00:20:14.436614  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-836309 -n functional-836309
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-17 00:27:57.047781917 +0000 UTC m=+2384.515125495
functional_test.go:1645: (dbg) Run:  kubectl --context functional-836309 describe po hello-node-connect-7d85dfc575-54xkq -n default
functional_test.go:1645: (dbg) kubectl --context functional-836309 describe po hello-node-connect-7d85dfc575-54xkq -n default:
Name:             hello-node-connect-7d85dfc575-54xkq
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-836309/192.168.49.2
Start Time:       Wed, 17 Sep 2025 00:17:56 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9ldx8 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-9ldx8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-54xkq to functional-836309
  Normal   Pulling    2m14s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     112s (x5 over 9m15s)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     112s (x5 over 9m15s)  kubelet            Error: ErrImagePull
  Warning  Failed     45s (x16 over 9m15s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    5s (x19 over 9m15s)   kubelet            Back-off pulling image "kicbase/echo-server"
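The short-name failure above is the root cause of this test: the deployment was created with the unqualified image name "kicbase/echo-server", and CRI-O will not expand short names when /etc/containers/registries.conf defines no unqualified-search registries. Two possible fixes, sketched under the assumption that the image is published on docker.io under the same name:

    # Fix 1: create the deployment with a fully qualified image reference,
    # so no short-name resolution is needed at all:
    kubectl --context functional-836309 create deployment hello-node-connect \
      --image docker.io/kicbase/echo-server

    # Fix 2: allow short-name expansion against docker.io inside the node
    # (run via "minikube -p functional-836309 ssh"), then restart CRI-O:
    sudo sh -c 'echo "unqualified-search-registries = [\"docker.io\"]" >> /etc/containers/registries.conf'
    sudo systemctl restart crio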
functional_test.go:1645: (dbg) Run:  kubectl --context functional-836309 logs hello-node-connect-7d85dfc575-54xkq -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-836309 logs hello-node-connect-7d85dfc575-54xkq -n default: exit status 1 (73.868125ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-54xkq" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-836309 logs hello-node-connect-7d85dfc575-54xkq -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-836309 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-54xkq
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-836309/192.168.49.2
Start Time:       Wed, 17 Sep 2025 00:17:56 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9ldx8 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-9ldx8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-54xkq to functional-836309
  Normal   Pulling    2m14s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     112s (x5 over 9m15s)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     112s (x5 over 9m15s)  kubelet            Error: ErrImagePull
  Warning  Failed     45s (x16 over 9m15s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    5s (x19 over 9m15s)   kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-836309 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-836309 logs -l app=hello-node-connect: exit status 1 (65.607905ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-54xkq" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-836309 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-836309 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.163.232
IPs:                      10.96.163.232
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30344/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
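The empty Endpoints field follows directly from the pod events: the service selector does match the pod, but a pod is only added to a service's endpoints once it reports Ready, and this one never got past ImagePullBackOff. A quick check of that chain, assuming the same kubectl context:

    # Endpoints stay empty until at least one selected pod is Ready:
    kubectl --context functional-836309 get endpoints hello-node-connect
    kubectl --context functional-836309 get pods -l app=hello-node-connect -o wide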
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-836309
helpers_test.go:243: (dbg) docker inspect functional-836309:

-- stdout --
	[
	    {
	        "Id": "3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5",
	        "Created": "2025-09-17T00:09:44.133139993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 564972,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:09:44.169133569Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5/hosts",
	        "LogPath": "/var/lib/docker/containers/3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5/3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5-json.log",
	        "Name": "/functional-836309",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-836309:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-836309",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5",
	                "LowerDir": "/var/lib/docker/overlay2/de2b96e7bc9a2a6ce5c4debfc0e842c0965361244c0995ec8ded64beb49c8264-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de2b96e7bc9a2a6ce5c4debfc0e842c0965361244c0995ec8ded64beb49c8264/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de2b96e7bc9a2a6ce5c4debfc0e842c0965361244c0995ec8ded64beb49c8264/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de2b96e7bc9a2a6ce5c4debfc0e842c0965361244c0995ec8ded64beb49c8264/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-836309",
	                "Source": "/var/lib/docker/volumes/functional-836309/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-836309",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-836309",
	                "name.minikube.sigs.k8s.io": "functional-836309",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "23448c026a24457ded735e88238de72a95f1b2d956a93efb7f9494b958befb64",
	            "SandboxKey": "/var/run/docker/netns/23448c026a24",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-836309": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:01:e3:2b:98:c6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f11c0adeed5b0a571ce66bcfa96404e5751f9da2bd5366531798e16160202bd2",
	                    "EndpointID": "47b04d28f82bdaef821c6f0a8dc045f3604bb616ac73b4ea262d9bb6aa905794",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-836309",
	                        "3ec3e877de9b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-836309 -n functional-836309
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-836309 logs -n 25: (1.541883346s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                   ARGS                                                    │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-836309 ssh cat /etc/hostname                                                                   │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ tunnel         │ functional-836309 tunnel --alsologtostderr                                                                │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ tunnel         │ functional-836309 tunnel --alsologtostderr                                                                │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ tunnel         │ functional-836309 tunnel --alsologtostderr                                                                │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ addons         │ functional-836309 addons list                                                                             │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:17 UTC │ 17 Sep 25 00:17 UTC │
	│ addons         │ functional-836309 addons list -o json                                                                     │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:17 UTC │ 17 Sep 25 00:17 UTC │
	│ start          │ -p functional-836309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:17 UTC │                     │
	│ start          │ -p functional-836309 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio           │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:17 UTC │                     │
	│ start          │ -p functional-836309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:17 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-836309 --alsologtostderr -v=1                                            │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:17 UTC │                     │
	│ service        │ functional-836309 service list                                                                            │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ service        │ functional-836309 service list -o json                                                                    │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ service        │ functional-836309 service --namespace=default --https --url hello-node                                    │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │                     │
	│ service        │ functional-836309 service hello-node --url --format={{.IP}}                                               │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │                     │
	│ service        │ functional-836309 service hello-node --url                                                                │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │                     │
	│ update-context │ functional-836309 update-context --alsologtostderr -v=2                                                   │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ update-context │ functional-836309 update-context --alsologtostderr -v=2                                                   │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ update-context │ functional-836309 update-context --alsologtostderr -v=2                                                   │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ image          │ functional-836309 image ls --format short --alsologtostderr                                               │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ image          │ functional-836309 image ls --format yaml --alsologtostderr                                                │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ ssh            │ functional-836309 ssh pgrep buildkitd                                                                     │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │                     │
	│ image          │ functional-836309 image build -t localhost/my-image:functional-836309 testdata/build --alsologtostderr    │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ image          │ functional-836309 image ls --format json --alsologtostderr                                                │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ image          │ functional-836309 image ls --format table --alsologtostderr                                               │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ image          │ functional-836309 image ls                                                                                │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:17:40
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:17:40.936845  583199 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:17:40.936953  583199 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:17:40.936960  583199 out.go:374] Setting ErrFile to fd 2...
	I0917 00:17:40.936966  583199 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:17:40.937339  583199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:17:40.937877  583199 out.go:368] Setting JSON to false
	I0917 00:17:40.938867  583199 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10804,"bootTime":1758057457,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:17:40.938993  583199 start.go:140] virtualization: kvm guest
	I0917 00:17:40.941492  583199 out.go:179] * [functional-836309] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:17:40.944227  583199 notify.go:220] Checking for updates...
	I0917 00:17:40.944335  583199 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:17:40.946765  583199 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:17:40.948295  583199 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:17:40.949696  583199 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 00:17:40.951158  583199 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:17:40.952856  583199 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:17:40.955046  583199 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:17:40.955588  583199 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:17:40.980713  583199 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:17:40.980830  583199 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:17:41.040600  583199 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-17 00:17:41.029871976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:17:41.040710  583199 docker.go:318] overlay module found
	I0917 00:17:41.043008  583199 out.go:179] * Using the docker driver based on existing profile
	I0917 00:17:41.045273  583199 start.go:304] selected driver: docker
	I0917 00:17:41.045298  583199 start.go:918] validating driver "docker" against &{Name:functional-836309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-836309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:17:41.045421  583199 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:17:41.048155  583199 out.go:203] 
	W0917 00:17:41.049889  583199 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0917 00:17:41.051309  583199 out.go:203] 
	
	
	==> CRI-O <==
	Sep 17 00:27:20 functional-836309 crio[4225]: time="2025-09-17 00:27:20.538512665Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=c70a7129-9b2b-42a1-901c-625d07470f2d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:23 functional-836309 crio[4225]: time="2025-09-17 00:27:23.538177106Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=7ac2e507-e92d-4f0d-8178-7bce606bf4ae name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:23 functional-836309 crio[4225]: time="2025-09-17 00:27:23.538433951Z" level=info msg="Image docker.io/mysql:5.7 not found" id=7ac2e507-e92d-4f0d-8178-7bce606bf4ae name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:28 functional-836309 crio[4225]: time="2025-09-17 00:27:28.538674400Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=14d20acf-cb63-41d8-87f9-e6f6be24d5db name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:28 functional-836309 crio[4225]: time="2025-09-17 00:27:28.538679844Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=f7363bb6-fb58-42b5-a328-1f67afece069 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:28 functional-836309 crio[4225]: time="2025-09-17 00:27:28.538948629Z" level=info msg="Image docker.io/nginx:alpine not found" id=f7363bb6-fb58-42b5-a328-1f67afece069 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:28 functional-836309 crio[4225]: time="2025-09-17 00:27:28.539070202Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=14d20acf-cb63-41d8-87f9-e6f6be24d5db name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:32 functional-836309 crio[4225]: time="2025-09-17 00:27:32.538851657Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=550fe214-d48d-4b6d-8925-0a4490f30ead name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:32 functional-836309 crio[4225]: time="2025-09-17 00:27:32.539143320Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=550fe214-d48d-4b6d-8925-0a4490f30ead name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:34 functional-836309 crio[4225]: time="2025-09-17 00:27:34.538527541Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=e2cebebc-a8c7-473b-9b2e-e9f7bc889c8f name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:34 functional-836309 crio[4225]: time="2025-09-17 00:27:34.538756834Z" level=info msg="Image docker.io/mysql:5.7 not found" id=e2cebebc-a8c7-473b-9b2e-e9f7bc889c8f name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:40 functional-836309 crio[4225]: time="2025-09-17 00:27:40.538691867Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=92dfc754-ebee-482e-b7f2-b9f4d621e6d0 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:40 functional-836309 crio[4225]: time="2025-09-17 00:27:40.538731729Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=39817a34-ee91-42cf-a090-990f426d8aca name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:40 functional-836309 crio[4225]: time="2025-09-17 00:27:40.538937217Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=39817a34-ee91-42cf-a090-990f426d8aca name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:40 functional-836309 crio[4225]: time="2025-09-17 00:27:40.538935822Z" level=info msg="Image docker.io/nginx:alpine not found" id=92dfc754-ebee-482e-b7f2-b9f4d621e6d0 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:46 functional-836309 crio[4225]: time="2025-09-17 00:27:46.538608773Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=6a8ab42f-e8e2-409e-b96c-d33b755caa31 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:46 functional-836309 crio[4225]: time="2025-09-17 00:27:46.538879747Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=6a8ab42f-e8e2-409e-b96c-d33b755caa31 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:48 functional-836309 crio[4225]: time="2025-09-17 00:27:48.538222317Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=acfffda6-55f9-45be-9e5d-3dba0181e5ea name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:48 functional-836309 crio[4225]: time="2025-09-17 00:27:48.538508301Z" level=info msg="Image docker.io/mysql:5.7 not found" id=acfffda6-55f9-45be-9e5d-3dba0181e5ea name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:48 functional-836309 crio[4225]: time="2025-09-17 00:27:48.539150483Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=676992ee-f3eb-4d26-97b6-74233de64ed8 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:27:48 functional-836309 crio[4225]: time="2025-09-17 00:27:48.544112635Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Sep 17 00:27:53 functional-836309 crio[4225]: time="2025-09-17 00:27:53.539823059Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=45420cd6-fbc1-4b80-9b8c-44d85b791bff name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:53 functional-836309 crio[4225]: time="2025-09-17 00:27:53.540090266Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=45420cd6-fbc1-4b80-9b8c-44d85b791bff name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:54 functional-836309 crio[4225]: time="2025-09-17 00:27:54.538215165Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=ff63008a-b797-4f40-80f4-2a499c99d28e name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:27:54 functional-836309 crio[4225]: time="2025-09-17 00:27:54.538457437Z" level=info msg="Image docker.io/nginx:alpine not found" id=ff63008a-b797-4f40-80f4-2a499c99d28e name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb474edf243b1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   15 minutes ago      Exited              mount-munger              0                   d689b11bc9243       busybox-mount
	9f2aad7cc830a       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      16 minutes ago      Running             kube-apiserver            0                   cb31a6d151f18       kube-apiserver-functional-836309
	8fc6aae6af439       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      16 minutes ago      Running             kube-controller-manager   2                   073e9000e2cbd       kube-controller-manager-functional-836309
	a14ceabc188eb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      17 minutes ago      Running             etcd                      1                   bd997b17bb8d3       etcd-functional-836309
	888d62ee0b634       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      17 minutes ago      Exited              kube-controller-manager   1                   073e9000e2cbd       kube-controller-manager-functional-836309
	c06f60831d1a2       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      17 minutes ago      Running             kube-proxy                1                   04529c3273474       kube-proxy-cbvjf
	64858777ddc03       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      17 minutes ago      Running             kindnet-cni               1                   e619e5a0562ff       kindnet-h2rjf
	8414e6a217a0a       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      17 minutes ago      Running             kube-scheduler            1                   c5ca55e367f9f       kube-scheduler-functional-836309
	8750ce41941ba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Running             storage-provisioner       1                   9bd06274bf9f1       storage-provisioner
	9d874bdc79320       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      17 minutes ago      Running             coredns                   1                   4111a7c1816a0       coredns-66bc5c9577-zvmqf
	43960daf0ceb5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      17 minutes ago      Exited              coredns                   0                   4111a7c1816a0       coredns-66bc5c9577-zvmqf
	fee9c2e341d4f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Exited              storage-provisioner       0                   9bd06274bf9f1       storage-provisioner
	94e0331fcf046       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      17 minutes ago      Exited              kindnet-cni               0                   e619e5a0562ff       kindnet-h2rjf
	2590bb5313e64       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      17 minutes ago      Exited              kube-proxy                0                   04529c3273474       kube-proxy-cbvjf
	fd4423f996e17       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      18 minutes ago      Exited              kube-scheduler            0                   c5ca55e367f9f       kube-scheduler-functional-836309
	66e1997c75a09       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      18 minutes ago      Exited              etcd                      0                   bd997b17bb8d3       etcd-functional-836309
	
	
	==> coredns [43960daf0ceb508755bb95ca37b4c30a5d31d7bdbf6bef6d16e3dbefa1056330] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58276 - 22452 "HINFO IN 7807615287491316741.4205491171577213210. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036670075s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9d874bdc7932076f658b9567185beccffdb2e85d489d293dfe85e3e619013c1f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34900 - 1175 "HINFO IN 6559932629016620651.4444246566734803126. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.054012876s
	
	
	==> describe nodes <==
	Name:               functional-836309
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-836309
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=functional-836309
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T00_09_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:09:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-836309
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:27:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:25:00 +0000   Wed, 17 Sep 2025 00:09:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:25:00 +0000   Wed, 17 Sep 2025 00:09:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:25:00 +0000   Wed, 17 Sep 2025 00:09:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:25:00 +0000   Wed, 17 Sep 2025 00:10:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-836309
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 67f7de0bcecd43499ea9b16c8c00a864
	  System UUID:                e097105d-a213-4ebf-95fe-cce4cad422c0
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-m76kz                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  default                     hello-node-connect-7d85dfc575-54xkq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-l9pq7                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     16m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-66bc5c9577-zvmqf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     17m
	  kube-system                 etcd-functional-836309                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         18m
	  kube-system                 kindnet-h2rjf                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-functional-836309              250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-functional-836309     200m (2%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-cbvjf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-functional-836309              100m (1%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-htbkl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lm4gk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 17m                kube-proxy       
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node functional-836309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node functional-836309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node functional-836309 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     18m                kubelet          Node functional-836309 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  18m                kubelet          Node functional-836309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m                kubelet          Node functional-836309 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           17m                node-controller  Node functional-836309 event: Registered Node functional-836309 in Controller
	  Normal  NodeReady                17m                kubelet          Node functional-836309 status is now: NodeReady
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node functional-836309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node functional-836309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x8 over 16m)  kubelet          Node functional-836309 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node functional-836309 event: Registered Node functional-836309 in Controller
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
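
Note: the repeated "martian destination 127.0.0.11" lines are pod DNS traffic aimed at the container-embedded resolver on 127.0.0.11; the kernel flags loopback-range destinations arriving on veth/eth interfaces as martians. kube-proxy explicitly sets route_localnet=1 (see its logs below), which makes such packets deliverable, so in this configuration the messages are noise rather than a fault. A hedged triage sketch from inside the node (e.g. via `minikube ssh`):

    sysctl net.ipv4.conf.all.route_localnet    # expect 1 once kube-proxy has set it
    sysctl net.ipv4.conf.all.log_martians      # 1 means martian packets get logged
    sudo sysctl -w net.ipv4.conf.all.log_martians=0   # silences the dmesg noise only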
	
	
	==> etcd [66e1997c75a09719465fdda73ab2f14bd72552ff33212c4d720f74944117320d] <==
	{"level":"warn","ts":"2025-09-17T00:09:55.212910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.220259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.227159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.234529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.243853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.251054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.257902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51972","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:10:42.783237Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-17T00:10:42.783351Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-836309","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-17T00:10:42.783494Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T00:10:49.785151Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T00:10:49.785250Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:10:49.785304Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-09-17T00:10:49.785881Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:10:49.785904Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:10:49.785429Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:10:49.785915Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-17T00:10:49.785929Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:10:49.785947Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:10:49.785969Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-17T00:10:49.785982Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-17T00:10:49.788632Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-17T00:10:49.788702Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:10:49.788727Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-17T00:10:49.788733Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-836309","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [a14ceabc188ebbf10535dda7c1f798592d2e79e03743ad28e2bd444ce75333ba] <==
	{"level":"warn","ts":"2025-09-17T00:11:02.799199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.806444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.812694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.819824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.828034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.834969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.841980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.849538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.863753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.870164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.878044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.884349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.890622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.898140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.905536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.912507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.926102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.939007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.982536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50360","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:21:02.483178Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1020}
	{"level":"info","ts":"2025-09-17T00:21:02.502430Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1020,"took":"18.848289ms","hash":1664117828,"current-db-size-bytes":3403776,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1634304,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-09-17T00:21:02.502483Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1664117828,"revision":1020,"compact-revision":-1}
	{"level":"info","ts":"2025-09-17T00:26:02.487994Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1500}
	{"level":"info","ts":"2025-09-17T00:26:02.491714Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1500,"took":"3.288748ms","hash":4255473852,"current-db-size-bytes":3403776,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":2572288,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2025-09-17T00:26:02.491769Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4255473852,"revision":1500,"compact-revision":1020}
	
	
	==> kernel <==
	 00:27:58 up  3:10,  0 users,  load average: 0.00, 0.16, 6.00
	Linux functional-836309 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [64858777ddc0357994b52a6fd8bf79dba5ac39143453505e0f08e2a242aecae8] <==
	I0917 00:25:53.722529       1 main.go:301] handling current node
	I0917 00:26:03.724366       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:26:03.724419       1 main.go:301] handling current node
	I0917 00:26:13.723904       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:26:13.723952       1 main.go:301] handling current node
	I0917 00:26:23.720517       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:26:23.720552       1 main.go:301] handling current node
	I0917 00:26:33.717187       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:26:33.717242       1 main.go:301] handling current node
	I0917 00:26:43.716564       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:26:43.716600       1 main.go:301] handling current node
	I0917 00:26:53.716975       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:26:53.717033       1 main.go:301] handling current node
	I0917 00:27:03.725206       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:27:03.725257       1 main.go:301] handling current node
	I0917 00:27:13.718586       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:27:13.718620       1 main.go:301] handling current node
	I0917 00:27:23.718315       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:27:23.718364       1 main.go:301] handling current node
	I0917 00:27:33.716610       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:27:33.716664       1 main.go:301] handling current node
	I0917 00:27:43.716734       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:27:43.716780       1 main.go:301] handling current node
	I0917 00:27:53.720547       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:27:53.720593       1 main.go:301] handling current node
	
	
	==> kindnet [94e0331fcf046a39dfa4b150cab0807b41735b3149fccd0d7298c096121f3177] <==
	I0917 00:10:04.407562       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0917 00:10:04.407829       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0917 00:10:04.407974       1 main.go:148] setting mtu 1500 for CNI 
	I0917 00:10:04.407992       1 main.go:178] kindnetd IP family: "ipv4"
	I0917 00:10:04.408041       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-17T00:10:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0917 00:10:04.608241       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0917 00:10:04.608325       1 controller.go:381] "Waiting for informer caches to sync"
	I0917 00:10:04.608338       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0917 00:10:04.608850       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0917 00:10:05.008798       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0917 00:10:05.008823       1 metrics.go:72] Registering metrics
	I0917 00:10:05.008870       1 controller.go:711] "Syncing nftables rules"
	I0917 00:10:14.613627       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:14.613697       1 main.go:301] handling current node
	I0917 00:10:24.615570       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:24.615608       1 main.go:301] handling current node
	I0917 00:10:34.612524       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:34.612559       1 main.go:301] handling current node
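
Note: the "nri plugin exited" line is expected when the runtime exposes no NRI socket; kindnetd continues without it, and the network-policy controller still syncs (caches synced, nftables rules syncing). The recurring "Handling node with IPs" pairs in both kindnet blocks are the periodic single-node reconcile loop, not errors. A trivial hedged check from inside the node:

    # Confirm the socket named in the error is simply absent:
    test -S /var/run/nri/nri.sock && echo "nri socket present" || echo "nri socket absent"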
	
	
	==> kube-apiserver [9f2aad7cc830a3ec57ba1b3d2cd335c4f402ff995fba44cd8dd9944ea36855bb] <==
	I0917 00:15:47.255026       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:16:24.666189       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:17:12.893722       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:17:41.988170       1 controller.go:667] quota admission added evaluator for: namespaces
	I0917 00:17:42.111550       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.15.120"}
	I0917 00:17:42.123959       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.205.187"}
	I0917 00:17:49.333363       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:17:56.710543       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.163.232"}
	I0917 00:18:40.466273       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:19:10.646833       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:20:06.755790       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:20:15.654078       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:21:03.383145       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:21:15.681088       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:21:44.816934       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:22:16.629730       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:22:49.667816       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:23:23.832323       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:23:58.536455       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:24:48.772842       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:25:08.408966       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:25:55.188361       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:26:31.172963       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:27:22.801733       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:27:58.858851       1 stats.go:136] "Error getting keys" err="empty key: \"\""
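
Note: the "allocated clusterIPs" entries confirm the dashboard and hello-node Services were created (10.103.15.120, 10.99.205.187, 10.96.163.232); the recurring `"Error getting keys" err="empty key"` message comes from the apiserver's periodic storage-stats collection and does not, by itself, explain the failures listed at the top of this report. A hedged cross-check, assuming the kubeconfig context matches the profile name:

    kubectl --context functional-836309 get svc -A | grep -E 'dashboard|hello-node'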
	
	
	==> kube-controller-manager [888d62ee0b634c673d1878ce150c6f0034e298592a41de5b4a133d003db1a139] <==
	I0917 00:10:43.989698       1 serving.go:386] Generated self-signed cert in-memory
	I0917 00:10:44.308654       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0917 00:10:44.308686       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:10:44.310251       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 00:10:44.310301       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 00:10:44.310653       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0917 00:10:44.310800       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0917 00:10:56.321004       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
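
Note: this controller-manager instance never finished starting: the apiserver at 192.168.49.2:8441 (the non-default port this functional test uses) was refusing connections, so building the controller context timed out. The replacement instance below syncs all its caches, which points at a transient window during the apiserver restart rather than a persistent fault. A hedged probe from inside the node (-k because the serving cert is cluster-internal):

    curl -sk https://192.168.49.2:8441/healthz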
	
	
	==> kube-controller-manager [8fc6aae6af439080e3411b9cb8143eddc1da6c5a6e3211c2a191a3dbfa865ca9] <==
	I0917 00:11:06.793750       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0917 00:11:06.793797       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0917 00:11:06.795031       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0917 00:11:06.795086       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0917 00:11:06.795122       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0917 00:11:06.795131       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0917 00:11:06.795137       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0917 00:11:06.795175       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0917 00:11:06.795208       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0917 00:11:06.797152       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0917 00:11:06.798500       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0917 00:11:06.800827       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:11:06.800851       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0917 00:11:06.800859       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 00:11:06.800834       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:11:06.803222       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:11:06.805177       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0917 00:11:06.807633       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0917 00:11:06.816406       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0917 00:17:42.036751       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:17:42.041055       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:17:42.045573       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:17:42.045609       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:17:42.049857       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:17:42.055310       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
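
Note: the five "Unhandled Error" lines at 00:17:42 are a creation-order race in the dashboard addon: the ReplicaSets were reconciled before the kubernetes-dashboard ServiceAccount existed, so pod creation was briefly forbidden. Both dashboard pods appear in the node allocation table above, so the race resolved once the ServiceAccount was created. A hedged verification:

    kubectl --context functional-836309 -n kubernetes-dashboard get serviceaccounts,replicasets,pods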
	
	
	==> kube-proxy [2590bb5313e648a1d5258fd84180691999e1fa74ac7e4a9bad97c4eaec4d2485] <==
	I0917 00:10:04.193311       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:10:04.263769       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:10:04.364709       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:10:04.364767       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:10:04.364855       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:10:04.385096       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:10:04.385159       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:10:04.390876       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:10:04.391486       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:10:04.391511       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:10:04.393121       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:10:04.393158       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:10:04.393167       1 config.go:200] "Starting service config controller"
	I0917 00:10:04.393187       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:10:04.393201       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:10:04.393189       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:10:04.393246       1 config.go:309] "Starting node config controller"
	I0917 00:10:04.393260       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:10:04.493428       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:10:04.493462       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:10:04.493428       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:10:04.493439       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
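
Note: the "Kube-proxy configuration may be incomplete or incorrect" line is advisory: with nodePortAddresses unset, NodePort connections are accepted on all local IPs, which is minikube's default behavior, not a failure. If the log's suggestion were adopted, the corresponding KubeProxyConfiguration fragment would look roughly like the sketch below (an assumption; the test harness does not set this):

    # KubeProxyConfiguration fragment (sketch only):
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    nodePortAddresses: ["primary"]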
	
	
	==> kube-proxy [c06f60831d1a27beead1133ee09bd56597eea7ed1a44bd377eb0a2445447cee8] <==
	I0917 00:10:43.389590       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:10:43.460712       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:10:43.561820       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:10:43.561866       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:10:43.561957       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:10:43.585276       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:10:43.585350       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:10:43.590785       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:10:43.591164       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:10:43.591200       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:10:43.593011       1 config.go:200] "Starting service config controller"
	I0917 00:10:43.593356       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:10:43.593113       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:10:43.593126       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:10:43.593435       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:10:43.593437       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:10:43.593165       1 config.go:309] "Starting node config controller"
	I0917 00:10:43.593494       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:10:43.593503       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:10:43.693526       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:10:43.693578       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:10:43.693636       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8414e6a217a0a65711aa4a8781ace6ed51c30407bf0166b9c4024dad4b506e9c] <==
	I0917 00:10:44.134044       1 serving.go:386] Generated self-signed cert in-memory
	I0917 00:10:51.491622       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:10:51.491651       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:10:51.496210       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:51.496222       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0917 00:10:51.496254       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:51.496251       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:10:51.496272       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:10:51.496256       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0917 00:10:51.496635       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:10:51.496706       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:10:51.596824       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:10:51.597020       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0917 00:10:51.597094       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0917 00:11:03.387571       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0917 00:11:03.387692       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0917 00:11:03.387722       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:11:03.387745       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0917 00:11:03.387764       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0917 00:11:03.387800       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
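
Note: the "Failed to watch ... forbidden" burst at 00:11:03 hits informers that system:kube-scheduler is normally authorized for; taken together with the apiserver restart visible elsewhere in these logs, it reads as a brief authorizer/cache window rather than an RBAC regression, and no watch failures are reported afterwards. A hedged spot check:

    kubectl --context functional-836309 auth can-i watch nodes --as=system:kube-scheduler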
	
	
	==> kube-scheduler [fd4423f996e172ec520acd90ab88ecb92a9bfa721cc812a9d73b36f24a393306] <==
	E0917 00:09:56.365834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0917 00:09:56.365882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0917 00:09:56.366012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0917 00:09:56.366067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0917 00:09:56.366104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0917 00:09:56.366185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0917 00:09:56.366277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0917 00:09:56.366176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0917 00:09:56.366533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0917 00:09:56.366612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0917 00:09:56.366642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0917 00:09:56.366681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0917 00:09:56.366732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:09:56.366735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0917 00:09:56.366804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0917 00:09:56.366825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0917 00:09:56.366896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0917 00:09:56.366939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I0917 00:09:57.962884       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:42.641974       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 00:10:42.642087       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:42.642285       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0917 00:10:42.642311       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0917 00:10:42.642328       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0917 00:10:42.642359       1 run.go:72] "command failed" err="finished without leader elect"
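
Note: "command failed err=finished without leader elect" is the normal exit path for a scheduler whose leader-election context is cancelled during graceful termination; the preceding lines show the secure server shutting down cleanly. A hedged way to see which instance currently holds leadership:

    kubectl --context functional-836309 -n kube-system get lease kube-scheduler -o yaml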
	
	
	==> kubelet <==
	Sep 17 00:27:23 functional-836309 kubelet[5462]: E0917 00:27:23.538693    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-l9pq7" podUID="a1c1727d-2e60-4a98-8ae8-aa7319d47aed"
	Sep 17 00:27:25 functional-836309 kubelet[5462]: E0917 00:27:25.538140    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="0f84d084-6e2e-4197-b486-4ba402096a6c"
	Sep 17 00:27:27 functional-836309 kubelet[5462]: E0917 00:27:27.538045    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-54xkq" podUID="2d5c821a-47c0-4488-b33d-e43b5a07a2f0"
	Sep 17 00:27:28 functional-836309 kubelet[5462]: E0917 00:27:28.539308    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="54252b1b-51bf-4359-848b-6b08a8f68dcd"
	Sep 17 00:27:28 functional-836309 kubelet[5462]: E0917 00:27:28.539406    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-htbkl" podUID="a547af32-a08d-4709-9ee2-63f12a40647a"
	Sep 17 00:27:31 functional-836309 kubelet[5462]: E0917 00:27:31.538895    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-m76kz" podUID="de55227f-8aa8-49c2-b1dc-b0517b716b2d"
	Sep 17 00:27:31 functional-836309 kubelet[5462]: E0917 00:27:31.708688    5462 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068851708439008  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 17 00:27:31 functional-836309 kubelet[5462]: E0917 00:27:31.708733    5462 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068851708439008  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 17 00:27:32 functional-836309 kubelet[5462]: E0917 00:27:32.539515    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lm4gk" podUID="3f7e653f-cd38-4dd9-8d08-5632496af8f8"
	Sep 17 00:27:34 functional-836309 kubelet[5462]: E0917 00:27:34.539121    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-l9pq7" podUID="a1c1727d-2e60-4a98-8ae8-aa7319d47aed"
	Sep 17 00:27:40 functional-836309 kubelet[5462]: E0917 00:27:40.538587    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="0f84d084-6e2e-4197-b486-4ba402096a6c"
	Sep 17 00:27:40 functional-836309 kubelet[5462]: E0917 00:27:40.539203    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="54252b1b-51bf-4359-848b-6b08a8f68dcd"
	Sep 17 00:27:40 functional-836309 kubelet[5462]: E0917 00:27:40.539274    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-htbkl" podUID="a547af32-a08d-4709-9ee2-63f12a40647a"
	Sep 17 00:27:41 functional-836309 kubelet[5462]: E0917 00:27:41.538825    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-54xkq" podUID="2d5c821a-47c0-4488-b33d-e43b5a07a2f0"
	Sep 17 00:27:41 functional-836309 kubelet[5462]: E0917 00:27:41.710141    5462 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068861709907522  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 17 00:27:41 functional-836309 kubelet[5462]: E0917 00:27:41.710178    5462 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068861709907522  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 17 00:27:44 functional-836309 kubelet[5462]: E0917 00:27:44.538246    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-m76kz" podUID="de55227f-8aa8-49c2-b1dc-b0517b716b2d"
	Sep 17 00:27:46 functional-836309 kubelet[5462]: E0917 00:27:46.539229    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lm4gk" podUID="3f7e653f-cd38-4dd9-8d08-5632496af8f8"
	Sep 17 00:27:51 functional-836309 kubelet[5462]: E0917 00:27:51.711811    5462 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068871711487356  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 17 00:27:51 functional-836309 kubelet[5462]: E0917 00:27:51.711857    5462 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068871711487356  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:200836}  inodes_used:{value:104}}"
	Sep 17 00:27:52 functional-836309 kubelet[5462]: E0917 00:27:52.537953    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-54xkq" podUID="2d5c821a-47c0-4488-b33d-e43b5a07a2f0"
	Sep 17 00:27:53 functional-836309 kubelet[5462]: E0917 00:27:53.540366    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-htbkl" podUID="a547af32-a08d-4709-9ee2-63f12a40647a"
	Sep 17 00:27:54 functional-836309 kubelet[5462]: E0917 00:27:54.538842    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="54252b1b-51bf-4359-848b-6b08a8f68dcd"
	Sep 17 00:27:55 functional-836309 kubelet[5462]: E0917 00:27:55.538460    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="0f84d084-6e2e-4197-b486-4ba402096a6c"
	Sep 17 00:27:57 functional-836309 kubelet[5462]: E0917 00:27:57.537813    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-m76kz" podUID="de55227f-8aa8-49c2-b1dc-b0517b716b2d"
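
Note: the kubelet log ties directly to the failures listed at the top of this report: every stuck workload (mysql, nginx-svc, sp-pod, both dashboard pods) is in ImagePullBackOff on docker.io's unauthenticated pull rate limit (toomanyrequests), and the echo-server pods additionally fail CRI-O short-name resolution because no unqualified-search registries are defined for `kicbase/echo-server:latest`. The eviction-manager "missing image stats" lines are a separate, recurring kubelet/CRI-O stats mismatch and are not what blocks the pods. Hedged mitigations, neither of which the test harness performs:

    # Side-load an image from the host to bypass the docker.io rate limit (sketch):
    minikube -p functional-836309 image load docker.io/library/mysql:5.7
    # Or let CRI-O resolve short names by adding to /etc/containers/registries.conf:
    #   unqualified-search-registries = ["docker.io"]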
	
	
	==> storage-provisioner [8750ce41941ba15a9b4b2e19cfe5128979331c1400a49209e1f4efb5b1318340] <==
	W0917 00:27:33.639133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:35.642219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:35.646584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:37.650257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:37.655245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:39.658210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:39.663535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:41.666691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:41.671320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:43.674996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:43.680260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:45.683415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:45.687497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:47.691502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:47.697314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:49.700653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:49.704792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:51.707685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:51.712170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:53.716000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:53.721784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:55.724613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:55.728946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:57.733511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:27:57.738351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fee9c2e341d4f1fd20c4ea1c22db8cd7eca409574ec8835d434658453643976f] <==
	W0917 00:10:17.462196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:19.465941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:19.471590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:21.475172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:21.479508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:23.483478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:23.491192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:25.495638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:25.501797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:27.506026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:27.512276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:29.515329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:29.519407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:31.522663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:31.529122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:33.532130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:33.536263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:35.539874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:35.544694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:37.549064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:37.553478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:39.557571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:39.563110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:41.566878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:41.571434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
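Two distinct root causes run through the kubelet and storage-provisioner excerpts above. Every docker.io pull fails with toomanyrequests (the unauthenticated Docker Hub pull rate limit), while the unqualified reference "kicbase/echo-server" fails even earlier: the node's /etc/containers/registries.conf defines no unqualified-search registries, so CRI-O refuses to guess a registry for the short name. The repeated "v1 Endpoints is deprecated" warnings from storage-provisioner are unrelated noise. A minimal remediation sketch for the short-name case, assuming the profile is still running and the key is not already set elsewhere in the file:

    minikube -p functional-836309 ssh
    # inside the node: let CRI-O expand short names against docker.io, then restart it
    echo 'unqualified-search-registries = ["docker.io"]' | sudo tee -a /etc/containers/registries.conf
    sudo systemctl restart crio

Using a fully qualified reference (docker.io/kicbase/echo-server:latest) in the manifest would sidestep short-name resolution entirely.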
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-836309 -n functional-836309
helpers_test.go:269: (dbg) Run:  kubectl --context functional-836309 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-m76kz hello-node-connect-7d85dfc575-54xkq mysql-5bb876957f-l9pq7 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-htbkl kubernetes-dashboard-855c9754f9-lm4gk
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-836309 describe pod busybox-mount hello-node-75c85bcc94-m76kz hello-node-connect-7d85dfc575-54xkq mysql-5bb876957f-l9pq7 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-htbkl kubernetes-dashboard-855c9754f9-lm4gk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-836309 describe pod busybox-mount hello-node-75c85bcc94-m76kz hello-node-connect-7d85dfc575-54xkq mysql-5bb876957f-l9pq7 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-htbkl kubernetes-dashboard-855c9754f9-lm4gk: exit status 1 (105.736994ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:11:31 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  cri-o://cb474edf243b1a8e4e93b368e7e6be5f76c0c8b839e74e1c49c1a7bff20a0680
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 17 Sep 2025 00:12:00 +0000
	      Finished:     Wed, 17 Sep 2025 00:12:00 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zvp4d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zvp4d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  16m   default-scheduler  Successfully assigned default/busybox-mount to functional-836309
	  Normal  Pulling    16m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     15m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.264s (28.084s including waiting). Image size: 4631262 bytes.
	  Normal  Created    15m   kubelet            Created container: mount-munger
	  Normal  Started    15m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-m76kz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:11:25 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c4fhc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-c4fhc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  16m                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-m76kz to functional-836309
	  Normal   Pulling    11m (x5 over 16m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     10m (x5 over 16m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     10m (x5 over 16m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    92s (x47 over 16m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     92s (x47 over 16m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-54xkq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:17:56 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9ldx8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9ldx8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-54xkq to functional-836309
	  Normal   Pulling    2m16s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     114s (x5 over 9m17s)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     114s (x5 over 9m17s)  kubelet            Error: ErrImagePull
	  Warning  Failed     47s (x16 over 9m17s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    7s (x19 over 9m17s)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-l9pq7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:11:27 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-76bnk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-76bnk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  16m                  default-scheduler  Successfully assigned default/mysql-5bb876957f-l9pq7 to functional-836309
	  Normal   Pulling    10m (x5 over 16m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     8m47s (x5 over 16m)  kubelet            Error: ErrImagePull
	  Warning  Failed     5m16s (x6 over 16m)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    84s (x40 over 16m)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     59s (x42 over 16m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:12:08 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2v8fx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2v8fx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  15m                   default-scheduler  Successfully assigned default/nginx-svc to functional-836309
	  Normal   Pulling    8m59s (x5 over 15m)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     6m46s (x5 over 14m)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m46s (x5 over 14m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m53s (x19 over 14m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    45s (x34 over 14m)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:11:37 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85lfd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-85lfd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  16m                  default-scheduler  Successfully assigned default/sp-pod to functional-836309
	  Normal   Pulling    9m34s (x5 over 16m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     8m17s (x5 over 15m)  kubelet            Error: ErrImagePull
	  Warning  Failed     4m16s (x6 over 15m)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    73s (x38 over 15m)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     19s (x42 over 15m)   kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-htbkl" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-lm4gk" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-836309 describe pod busybox-mount hello-node-75c85bcc94-m76kz hello-node-connect-7d85dfc575-54xkq mysql-5bb876957f-l9pq7 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-htbkl kubernetes-dashboard-855c9754f9-lm4gk: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.22s)
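The failure is therefore environmental rather than a service-routing bug: neither echo-server pod ever received an image. One hedged way to decouple such a run from Docker Hub is to pull the image on the host and side-load it into the profile before the deployment exists, assuming the host itself is not rate-limited:

    docker pull docker.io/kicbase/echo-server:latest
    minikube -p functional-836309 image load docker.io/kicbase/echo-server:latest

Note that an untagged reference defaults to :latest, for which the kubelet's default imagePullPolicy is Always; the manifest would also need imagePullPolicy: IfNotPresent for the preloaded copy to be used.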

x
+
TestFunctional/parallel/PersistentVolumeClaim (368.34s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [4148aae6-c97a-4dec-98b0-172efdad09fb] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005848448s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-836309 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-836309 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-836309 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-836309 apply -f testdata/storage-provisioner/pod.yaml
I0917 00:11:37.423409  521273 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [0f84d084-6e2e-4197-b486-4ba402096a6c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-836309 -n functional-836309
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-17 00:17:37.745586008 +0000 UTC m=+1765.212929586
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-836309 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-836309 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-836309/192.168.49.2
Start Time:       Wed, 17 Sep 2025 00:11:37 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:  10.244.0.7
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85lfd (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-85lfd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  6m                  default-scheduler  Successfully assigned default/sp-pod to functional-836309
  Normal   Pulling    72s (x4 over 6m)    kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     36s (x4 over 5m7s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     36s (x4 over 5m7s)  kubelet            Error: ErrImagePull
  Normal   BackOff    1s (x8 over 5m7s)   kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     1s (x8 over 5m7s)   kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-836309 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-836309 logs sp-pod -n default: exit status 1 (69.879626ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-836309 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
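Apart from the pull, sp-pod is healthy: the claim bound and the sandbox started (PodReadyToStartContainers is True), so the docker.io rate limit is the only blocker. A sketch of the standard mitigation, authenticating pulls so Docker Hub applies the higher per-account limit; <DOCKERHUB_USER> and <DOCKERHUB_TOKEN> are placeholders:

    kubectl --context functional-836309 create secret docker-registry dockerhub-creds \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<DOCKERHUB_USER> --docker-password=<DOCKERHUB_TOKEN>
    kubectl --context functional-836309 patch serviceaccount default \
      -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'

Pods created under the default service account afterwards pull with those credentials.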
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-836309
helpers_test.go:243: (dbg) docker inspect functional-836309:

-- stdout --
	[
	    {
	        "Id": "3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5",
	        "Created": "2025-09-17T00:09:44.133139993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 564972,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:09:44.169133569Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5/hosts",
	        "LogPath": "/var/lib/docker/containers/3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5/3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5-json.log",
	        "Name": "/functional-836309",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-836309:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-836309",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5",
	                "LowerDir": "/var/lib/docker/overlay2/de2b96e7bc9a2a6ce5c4debfc0e842c0965361244c0995ec8ded64beb49c8264-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de2b96e7bc9a2a6ce5c4debfc0e842c0965361244c0995ec8ded64beb49c8264/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de2b96e7bc9a2a6ce5c4debfc0e842c0965361244c0995ec8ded64beb49c8264/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de2b96e7bc9a2a6ce5c4debfc0e842c0965361244c0995ec8ded64beb49c8264/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-836309",
	                "Source": "/var/lib/docker/volumes/functional-836309/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-836309",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-836309",
	                "name.minikube.sigs.k8s.io": "functional-836309",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "23448c026a24457ded735e88238de72a95f1b2d956a93efb7f9494b958befb64",
	            "SandboxKey": "/var/run/docker/netns/23448c026a24",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-836309": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:01:e3:2b:98:c6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f11c0adeed5b0a571ce66bcfa96404e5751f9da2bd5366531798e16160202bd2",
	                    "EndpointID": "47b04d28f82bdaef821c6f0a8dc045f3604bb616ac73b4ea262d9bb6aa905794",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-836309",
	                        "3ec3e877de9b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
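The inspect output confirms the node container itself is healthy: running since 00:09:44 with a 4 GiB memory cap, and all five service ports (22, 2376, 5000, 8441, 32443) published on 127.0.0.1. When only the port bindings matter, the same data can be extracted with a standard Go-template query:

    docker inspect functional-836309 --format '{{json .NetworkSettings.Ports}}'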
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-836309 -n functional-836309
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-836309 logs -n 25: (1.56555311s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-836309 ssh -- ls -la /mount-9p                                                                                         │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ image   │ functional-836309 image ls                                                                                                        │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ functional-836309 ssh cat /mount-9p/test-1758067890026816102                                                                      │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ image   │ functional-836309 image save --daemon kicbase/echo-server:functional-836309 --alsologtostderr                                     │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:11 UTC │ 17 Sep 25 00:11 UTC │
	│ ssh     │ functional-836309 ssh stat /mount-9p/created-by-test                                                                              │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh     │ functional-836309 ssh stat /mount-9p/created-by-pod                                                                               │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh     │ functional-836309 ssh sudo umount -f /mount-9p                                                                                    │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ mount   │ -p functional-836309 /tmp/TestFunctionalparallelMountCmdspecific-port2099491116/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ ssh     │ functional-836309 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ ssh     │ functional-836309 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh     │ functional-836309 ssh -- ls -la /mount-9p                                                                                         │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh     │ functional-836309 ssh sudo umount -f /mount-9p                                                                                    │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ mount   │ -p functional-836309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665606697/001:/mount3 --alsologtostderr -v=1                │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ mount   │ -p functional-836309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665606697/001:/mount1 --alsologtostderr -v=1                │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ mount   │ -p functional-836309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665606697/001:/mount2 --alsologtostderr -v=1                │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ ssh     │ functional-836309 ssh findmnt -T /mount1                                                                                          │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ ssh     │ functional-836309 ssh findmnt -T /mount1                                                                                          │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh     │ functional-836309 ssh findmnt -T /mount2                                                                                          │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh     │ functional-836309 ssh findmnt -T /mount3                                                                                          │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ mount   │ -p functional-836309 --kill=true                                                                                                  │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ ssh     │ functional-836309 ssh echo hello                                                                                                  │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh     │ functional-836309 ssh cat /etc/hostname                                                                                           │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ tunnel  │ functional-836309 tunnel --alsologtostderr                                                                                        │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ tunnel  │ functional-836309 tunnel --alsologtostderr                                                                                        │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ tunnel  │ functional-836309 tunnel --alsologtostderr                                                                                        │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:10:32
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:10:32.593079  570486 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:10:32.593345  570486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:10:32.593349  570486 out.go:374] Setting ErrFile to fd 2...
	I0917 00:10:32.593352  570486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:10:32.593666  570486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:10:32.594143  570486 out.go:368] Setting JSON to false
	I0917 00:10:32.595133  570486 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10376,"bootTime":1758057457,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:10:32.595232  570486 start.go:140] virtualization: kvm guest
	I0917 00:10:32.597354  570486 out.go:179] * [functional-836309] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:10:32.598741  570486 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:10:32.598755  570486 notify.go:220] Checking for updates...
	I0917 00:10:32.601642  570486 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:10:32.603249  570486 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:10:32.604732  570486 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 00:10:32.606170  570486 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:10:32.607512  570486 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:10:32.609704  570486 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:10:32.609818  570486 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:10:32.635537  570486 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:10:32.635657  570486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:10:32.697898  570486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:68 SystemTime:2025-09-17 00:10:32.686036652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:10:32.698014  570486 docker.go:318] overlay module found
	I0917 00:10:32.700783  570486 out.go:179] * Using the docker driver based on existing profile
	I0917 00:10:32.702507  570486 start.go:304] selected driver: docker
	I0917 00:10:32.702521  570486 start.go:918] validating driver "docker" against &{Name:functional-836309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-836309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:10:32.702616  570486 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:10:32.702715  570486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:10:32.764917  570486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:68 SystemTime:2025-09-17 00:10:32.752200338 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:10:32.765634  570486 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:10:32.765657  570486 cni.go:84] Creating CNI manager for ""
	I0917 00:10:32.765720  570486 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 00:10:32.765766  570486 start.go:348] cluster config:
	{Name:functional-836309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-836309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:10:32.768071  570486 out.go:179] * Starting "functional-836309" primary control-plane node in "functional-836309" cluster
	I0917 00:10:32.769492  570486 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:10:32.771034  570486 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:10:32.772426  570486 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:10:32.772476  570486 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 00:10:32.772484  570486 cache.go:58] Caching tarball of preloaded images
	I0917 00:10:32.772563  570486 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:10:32.772595  570486 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:10:32.772605  570486 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:10:32.772758  570486 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/config.json ...
	I0917 00:10:32.794571  570486 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:10:32.794583  570486 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:10:32.794604  570486 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:10:32.794634  570486 start.go:360] acquireMachinesLock for functional-836309: {Name:mke54c6ba7e9a6839b9c8620d1f7f0f7c86e0ee5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:10:32.794701  570486 start.go:364] duration metric: took 47.83µs to acquireMachinesLock for "functional-836309"
	I0917 00:10:32.794720  570486 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:10:32.794725  570486 fix.go:54] fixHost starting: 
	I0917 00:10:32.795019  570486 cli_runner.go:164] Run: docker container inspect functional-836309 --format={{.State.Status}}
	I0917 00:10:32.815148  570486 fix.go:112] recreateIfNeeded on functional-836309: state=Running err=<nil>
	W0917 00:10:32.815173  570486 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:10:32.817429  570486 out.go:252] * Updating the running docker "functional-836309" container ...
	I0917 00:10:32.817478  570486 machine.go:93] provisionDockerMachine start ...
	I0917 00:10:32.817556  570486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-836309
	I0917 00:10:32.836896  570486 main.go:141] libmachine: Using SSH client type: native
	I0917 00:10:32.837148  570486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0917 00:10:32.837156  570486 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:10:32.974785  570486 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-836309
	
	I0917 00:10:32.974808  570486 ubuntu.go:182] provisioning hostname "functional-836309"
	I0917 00:10:32.974866  570486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-836309
	I0917 00:10:32.993788  570486 main.go:141] libmachine: Using SSH client type: native
	I0917 00:10:32.994015  570486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0917 00:10:32.994022  570486 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-836309 && echo "functional-836309" | sudo tee /etc/hostname
	I0917 00:10:33.147119  570486 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-836309
	
	I0917 00:10:33.147201  570486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-836309
	I0917 00:10:33.165597  570486 main.go:141] libmachine: Using SSH client type: native
	I0917 00:10:33.165809  570486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0917 00:10:33.165821  570486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-836309' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-836309/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-836309' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:10:33.303458  570486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:10:33.303480  570486 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:10:33.303536  570486 ubuntu.go:190] setting up certificates
	I0917 00:10:33.303550  570486 provision.go:84] configureAuth start
	I0917 00:10:33.303616  570486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-836309
	I0917 00:10:33.323243  570486 provision.go:143] copyHostCerts
	I0917 00:10:33.323310  570486 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:10:33.323317  570486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:10:33.323380  570486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:10:33.323540  570486 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:10:33.323547  570486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:10:33.323577  570486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:10:33.323650  570486 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:10:33.323653  570486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:10:33.323675  570486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:10:33.323735  570486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.functional-836309 san=[127.0.0.1 192.168.49.2 functional-836309 localhost minikube]
	I0917 00:10:33.401327  570486 provision.go:177] copyRemoteCerts
	I0917 00:10:33.401387  570486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:10:33.401441  570486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-836309
	I0917 00:10:33.420828  570486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/functional-836309/id_rsa Username:docker}
	I0917 00:10:33.520806  570486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:10:33.549186  570486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0917 00:10:33.576882  570486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:10:33.604162  570486 provision.go:87] duration metric: took 300.598779ms to configureAuth
	I0917 00:10:33.604183  570486 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:10:33.604429  570486 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:10:33.604571  570486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-836309
	I0917 00:10:33.623828  570486 main.go:141] libmachine: Using SSH client type: native
	I0917 00:10:33.624056  570486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I0917 00:10:33.624067  570486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:10:34.032163  570486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:10:34.032181  570486 machine.go:96] duration metric: took 1.21469579s to provisionDockerMachine
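(Note on the step above: /etc/sysconfig/crio.minikube is an environment file consumed by the node's crio systemd unit, which is why the write is followed by a crio restart; flagging the 10.96.0.0/12 service CIDR as an insecure registry range allows pulls from in-cluster registries without TLS. This reading is editorial, not log output.)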
	I0917 00:10:34.032191  570486 start.go:293] postStartSetup for "functional-836309" (driver="docker")
	I0917 00:10:34.032200  570486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:10:34.032272  570486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:10:34.032322  570486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-836309
	I0917 00:10:34.053306  570486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/functional-836309/id_rsa Username:docker}
	I0917 00:10:34.153856  570486 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:10:34.157732  570486 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:10:34.157751  570486 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:10:34.157757  570486 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:10:34.157762  570486 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:10:34.157773  570486 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:10:34.157832  570486 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:10:34.157898  570486 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:10:34.157972  570486 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/test/nested/copy/521273/hosts -> hosts in /etc/test/nested/copy/521273
	I0917 00:10:34.158008  570486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/521273
	I0917 00:10:34.168007  570486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:10:34.195504  570486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/test/nested/copy/521273/hosts --> /etc/test/nested/copy/521273/hosts (40 bytes)
	I0917 00:10:34.222237  570486 start.go:296] duration metric: took 190.032235ms for postStartSetup
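The filesync scan above mirrors everything under .minikube/files into the node filesystem at the same relative path; for this run that is (both mappings taken from the log lines above):

	.minikube/files/etc/ssl/certs/5212732.pem           -> /etc/ssl/certs/5212732.pem
	.minikube/files/etc/test/nested/copy/521273/hosts   -> /etc/test/nested/copy/521273/hosts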
	I0917 00:10:34.222318  570486 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:10:34.222352  570486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-836309
	I0917 00:10:34.241037  570486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/functional-836309/id_rsa Username:docker}
	I0917 00:10:34.336796  570486 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:10:34.342494  570486 fix.go:56] duration metric: took 1.547759424s for fixHost
	I0917 00:10:34.342514  570486 start.go:83] releasing machines lock for "functional-836309", held for 1.547804887s
	I0917 00:10:34.342580  570486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-836309
	I0917 00:10:34.362669  570486 ssh_runner.go:195] Run: cat /version.json
	I0917 00:10:34.362721  570486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-836309
	I0917 00:10:34.362768  570486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:10:34.362829  570486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-836309
	I0917 00:10:34.382434  570486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/functional-836309/id_rsa Username:docker}
	I0917 00:10:34.383227  570486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/functional-836309/id_rsa Username:docker}
	I0917 00:10:34.547784  570486 ssh_runner.go:195] Run: systemctl --version
	I0917 00:10:34.552749  570486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:10:34.696538  570486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:10:34.701524  570486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:10:34.710952  570486 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:10:34.711021  570486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:10:34.720376  570486 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:10:34.720405  570486 start.go:495] detecting cgroup driver to use...
	I0917 00:10:34.720462  570486 detect.go:190] detected "systemd" cgroup driver on host os
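A quick hand check for the "systemd" cgroup-driver detection above (a sketch, not a command from this log): the filesystem type of the cgroup mount distinguishes cgroup v2, which is typically paired with the systemd driver, from v1.

	stat -fc %T /sys/fs/cgroup   # prints cgroup2fs on cgroup v2, tmpfs on cgroup v1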
	I0917 00:10:34.720500  570486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:10:34.733738  570486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:10:34.746598  570486 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:10:34.746641  570486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:10:34.761064  570486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:10:34.773591  570486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:10:34.882776  570486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:10:34.997205  570486 docker.go:234] disabling docker service ...
	I0917 00:10:34.997324  570486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:10:35.012756  570486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:10:35.026495  570486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:10:35.137504  570486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:10:35.248412  570486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:10:35.260590  570486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:10:35.278267  570486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:10:35.278341  570486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:10:35.290481  570486 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:10:35.290548  570486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:10:35.301747  570486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:10:35.313153  570486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:10:35.324665  570486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:10:35.334723  570486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:10:35.345791  570486 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:10:35.355798  570486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
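Taken together, the sed edits above leave the touched keys in /etc/crio/crio.conf.d/02-crio.conf looking roughly like this (reconstructed from the commands; TOML section headers omitted, and the resulting file is not itself dumped in the log):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]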
	I0917 00:10:35.366214  570486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:10:35.375080  570486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:10:35.384524  570486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:10:35.496299  570486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 00:10:35.753298  570486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:10:35.753359  570486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:10:35.757669  570486 start.go:563] Will wait 60s for crictl version
	I0917 00:10:35.757718  570486 ssh_runner.go:195] Run: which crictl
	I0917 00:10:35.761414  570486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:10:35.798948  570486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:10:35.799033  570486 ssh_runner.go:195] Run: crio --version
	I0917 00:10:35.837581  570486 ssh_runner.go:195] Run: crio --version
	I0917 00:10:35.881915  570486 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:10:35.883562  570486 cli_runner.go:164] Run: docker network inspect functional-836309 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:10:35.903015  570486 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:10:35.909306  570486 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0917 00:10:35.910806  570486 kubeadm.go:875] updating cluster {Name:functional-836309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-836309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:10:35.910926  570486 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:10:35.910995  570486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:10:35.955172  570486 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:10:35.955185  570486 crio.go:433] Images already preloaded, skipping extraction
	I0917 00:10:35.955241  570486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:10:35.992525  570486 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:10:35.992539  570486 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:10:35.992545  570486 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0 crio true true} ...
	I0917 00:10:35.992647  570486 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-836309 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:functional-836309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
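(The empty ExecStart= line in the kubelet unit above is the standard systemd drop-in idiom: it clears the ExecStart inherited from the base kubelet.service so the ExecStart= that follows fully replaces the command rather than appending a second one.)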
	I0917 00:10:35.992710  570486 ssh_runner.go:195] Run: crio config
	I0917 00:10:36.038898  570486 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0917 00:10:36.038972  570486 cni.go:84] Creating CNI manager for ""
	I0917 00:10:36.038982  570486 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 00:10:36.038993  570486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:10:36.039013  570486 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-836309 NodeName:functional-836309 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:10:36.039150  570486 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-836309"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 00:10:36.039205  570486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:10:36.049646  570486 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:10:36.049724  570486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 00:10:36.059717  570486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0917 00:10:36.080273  570486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:10:36.101077  570486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
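A rendered config like the one shipped to /var/tmp/minikube/kubeadm.yaml.new can be linted by hand on the node; a sketch (this assumes kubeadm's "config validate" subcommand, present in recent releases, and reuses the binary path from the log):

	sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new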
	I0917 00:10:36.122421  570486 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0917 00:10:36.126913  570486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:10:36.234724  570486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:10:36.247925  570486 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309 for IP: 192.168.49.2
	I0917 00:10:36.247942  570486 certs.go:194] generating shared ca certs ...
	I0917 00:10:36.247964  570486 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:10:36.248115  570486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:10:36.248147  570486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:10:36.248153  570486 certs.go:256] generating profile certs ...
	I0917 00:10:36.248246  570486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.key
	I0917 00:10:36.248292  570486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/apiserver.key.94a23a78
	I0917 00:10:36.248325  570486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/proxy-client.key
	I0917 00:10:36.248459  570486 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:10:36.248483  570486 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:10:36.248489  570486 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:10:36.248512  570486 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:10:36.248529  570486 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:10:36.248546  570486 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:10:36.248579  570486 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:10:36.249149  570486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:10:36.275849  570486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:10:36.303975  570486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:10:36.331792  570486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:10:36.358921  570486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 00:10:36.387313  570486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:10:36.416015  570486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:10:36.444596  570486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 00:10:36.472750  570486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:10:36.500142  570486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:10:36.527470  570486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:10:36.554986  570486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:10:36.576370  570486 ssh_runner.go:195] Run: openssl version
	I0917 00:10:36.582602  570486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:10:36.593903  570486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:10:36.598411  570486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:10:36.598464  570486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:10:36.605927  570486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:10:36.616773  570486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:10:36.627595  570486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:10:36.631441  570486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:10:36.631488  570486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:10:36.638903  570486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:10:36.649665  570486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:10:36.660478  570486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:10:36.664499  570486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:10:36.664547  570486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:10:36.671785  570486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
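The three test -L / ln -fs steps above implement OpenSSL's subject-hash lookup convention: the symlink name is the hash printed by "openssl x509 -hash" plus a ".0" suffix. A hand-rolled equivalent for one of the certs (hash value as it appears in the log):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"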
	I0917 00:10:36.681940  570486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:10:36.685782  570486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:10:36.693167  570486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:10:36.700526  570486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:10:36.708130  570486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:10:36.715930  570486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:10:36.724028  570486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
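Each "-checkend 86400" call above makes openssl exit non-zero if the certificate expires within 86400 seconds (24 hours); the same sweep can be reproduced over the cert directories by hand (a sketch; paths as in the log):

	for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	  openssl x509 -noout -checkend 86400 -in "$c" >/dev/null || echo "expiring soon: $c"
	done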
	I0917 00:10:36.731537  570486 kubeadm.go:392] StartCluster: {Name:functional-836309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-836309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:10:36.731637  570486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 00:10:36.731715  570486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:10:36.769850  570486 cri.go:89] found id: "43960daf0ceb508755bb95ca37b4c30a5d31d7bdbf6bef6d16e3dbefa1056330"
	I0917 00:10:36.769862  570486 cri.go:89] found id: "fee9c2e341d4f1fd20c4ea1c22db8cd7eca409574ec8835d434658453643976f"
	I0917 00:10:36.769864  570486 cri.go:89] found id: "94e0331fcf046a39dfa4b150cab0807b41735b3149fccd0d7298c096121f3177"
	I0917 00:10:36.769866  570486 cri.go:89] found id: "2590bb5313e648a1d5258fd84180691999e1fa74ac7e4a9bad97c4eaec4d2485"
	I0917 00:10:36.769868  570486 cri.go:89] found id: "e07f8f9f5c0fd9fc2a846a6aa616e43475343e86bfb9b1f1d885429905018f2b"
	I0917 00:10:36.769870  570486 cri.go:89] found id: "fd4423f996e172ec520acd90ab88ecb92a9bfa721cc812a9d73b36f24a393306"
	I0917 00:10:36.769871  570486 cri.go:89] found id: "66e1997c75a09719465fdda73ab2f14bd72552ff33212c4d720f74944117320d"
	I0917 00:10:36.769873  570486 cri.go:89] found id: "d76ec241bed3bcb57087cc1c5058989fed0c0f9f2a54e1df71b62af8dd654498"
	I0917 00:10:36.769874  570486 cri.go:89] found id: ""
	I0917 00:10:36.769910  570486 ssh_runner.go:195] Run: sudo runc list -f json
	I0917 00:10:36.792993  570486 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"2590bb5313e648a1d5258fd84180691999e1fa74ac7e4a9bad97c4eaec4d2485","pid":1898,"status":"running","bundle":"/run/containers/storage/overlay-containers/2590bb5313e648a1d5258fd84180691999e1fa74ac7e4a9bad97c4eaec4d2485/userdata","rootfs":"/var/lib/containers/storage/overlay/c5c5d5c38c533f89a6fa7956a03b1d3e9f6f309ac4b7eb181593591b6c3b3bbe/merged","created":"2025-09-17T00:10:04.114004644Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e2e56a4","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e2e56a4\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMe
ssagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2590bb5313e648a1d5258fd84180691999e1fa74ac7e4a9bad97c4eaec4d2485","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:10:04.049737017Z","io.kubernetes.cri-o.Image":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.34.0","io.kubernetes.cri-o.ImageRef":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-cbvjf\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4e12d004-8422-442f-89a4-6455461dbebc\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-cbvjf_4e12d004-8422-442f-89a4-6455461dbebc/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/li
b/containers/storage/overlay/c5c5d5c38c533f89a6fa7956a03b1d3e9f6f309ac4b7eb181593591b6c3b3bbe/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-cbvjf_kube-system_4e12d004-8422-442f-89a4-6455461dbebc_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/04529c327347408f4830d101fcadc3e80af0f11d583d7d398af553996092f1b6/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"04529c327347408f4830d101fcadc3e80af0f11d583d7d398af553996092f1b6","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-cbvjf_kube-system_4e12d004-8422-442f-89a4-6455461dbebc_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selin
ux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4e12d004-8422-442f-89a4-6455461dbebc/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4e12d004-8422-442f-89a4-6455461dbebc/containers/kube-proxy/6995373a\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/4e12d004-8422-442f-89a4-6455461dbebc/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/4e12d004-8422-442f-89a4-6455461dbebc/volumes/kubernetes.io~projected/kube-api-access-mjf4s\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-cbvjf","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.t
erminationGracePeriod":"30","io.kubernetes.pod.uid":"4e12d004-8422-442f-89a4-6455461dbebc","kubernetes.io/config.seen":"2025-09-17T00:10:03.679904370Z","kubernetes.io/config.source":"api","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"43960daf0ceb508755bb95ca37b4c30a5d31d7bdbf6bef6d16e3dbefa1056330","pid":2203,"status":"running","bundle":"/run/containers/storage/overlay-containers/43960daf0ceb508755bb95ca37b4c30a5d31d7bdbf6bef6d16e3dbefa1056330/userdata","rootfs":"/var/lib/containers/storage/overlay/88b00b68701bc2514fbe9f874cd80afd0657a501e145edb4da471c7011f45e8e/merged","created":"2025-09-17T00:10:15.398723548Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9bf792","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9bf792\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"liveness-probe\\\",\\\"containerPort\\\":8080,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"readiness-probe\\\",\\\"containerPort\\\":8181,\\\
"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"43960daf0ceb508755bb95ca37b4c30a5d31d7bdbf6bef6d16e3dbefa1056330","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:10:15.35897164Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.12.1","io.kubernetes.cri-o.ImageRef":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-66bc5c9577-zvmqf\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"63ff7599-3b25-4ca5-846c-23262
1ea9f1e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-66bc5c9577-zvmqf_63ff7599-3b25-4ca5-846c-232621ea9f1e/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/88b00b68701bc2514fbe9f874cd80afd0657a501e145edb4da471c7011f45e8e/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-66bc5c9577-zvmqf_kube-system_63ff7599-3b25-4ca5-846c-232621ea9f1e_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4111a7c1816a0afea42f2ee63cf232194b277f23b56a61bb44aa0ca84b01af15/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4111a7c1816a0afea42f2ee63cf232194b277f23b56a61bb44aa0ca84b01af15","io.kubernetes.cri-o.SandboxName":"k8s_coredns-66bc5c9577-zvmqf_kube-system_63ff7599-3b25-4ca5-846c-232621ea9f1e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes"
:"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/63ff7599-3b25-4ca5-846c-232621ea9f1e/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/63ff7599-3b25-4ca5-846c-232621ea9f1e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/63ff7599-3b25-4ca5-846c-232621ea9f1e/containers/coredns/04665e31\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/63ff7599-3b25-4ca5-846c-232621ea9f1e/volumes/kubernetes.io~projected/kube-api-access-r2ksr\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-66bc5c9577-zvmqf","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGraceP
eriod":"30","io.kubernetes.pod.uid":"63ff7599-3b25-4ca5-846c-232621ea9f1e","kubernetes.io/config.seen":"2025-09-17T00:10:14.994686801Z","kubernetes.io/config.source":"api","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"66e1997c75a09719465fdda73ab2f14bd72552ff33212c4d720f74944117320d","pid":1445,"status":"running","bundle":"/run/containers/storage/overlay-containers/66e1997c75a09719465fdda73ab2f14bd72552ff33212c4d720f74944117320d/userdata","rootfs":"/var/lib/containers/storage/overlay/c2d6fdfe5cd65eb7d7323b679bd4357ed662c97a2ebc3b7e1b67ca5114b4b9ad/merged","created":"2025-09-17T00:09:54.333564175Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9e20c65","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\
"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9e20c65\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"66e1997c75a09719465fdda73ab2f14bd72552ff33212c4d720f74944117320d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:09:54.269223278Z","io.kubernetes.cri-o.Image":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.Imag
eName":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri-o.ImageRef":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-functional-836309\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"90ea92284f079327376eda1737195857\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-836309_90ea92284f079327376eda1737195857/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c2d6fdfe5cd65eb7d7323b679bd4357ed662c97a2ebc3b7e1b67ca5114b4b9ad/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-functional-836309_kube-system_90ea92284f079327376eda1737195857_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/bd997b17bb8d30b3139abfdcf571d5aa82235190f07c2a1cf8caa33e27093b44/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"bd997b17bb8d30b3139abf
dcf571d5aa82235190f07c2a1cf8caa33e27093b44","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-836309_kube-system_90ea92284f079327376eda1737195857_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/90ea92284f079327376eda1737195857/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/90ea92284f079327376eda1737195857/containers/etcd/1c650e8e\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"seli
nux_relabel\":false}]","io.kubernetes.pod.name":"etcd-functional-836309","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"90ea92284f079327376eda1737195857","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"90ea92284f079327376eda1737195857","kubernetes.io/config.seen":"2025-09-17T00:09:53.747000612Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"94e0331fcf046a39dfa4b150cab0807b41735b3149fccd0d7298c096121f3177","pid":1904,"status":"running","bundle":"/run/containers/storage/overlay-containers/94e0331fcf046a39dfa4b150cab0807b41735b3149fccd0d7298c096121f3177/userdata","rootfs":"/var/lib/containers/storage/overlay/ba0272a4a5ee5a7a5c1b138b7939
101955c3ee27f4663dcd3ddd5ec8fbb1e3d7/merged","created":"2025-09-17T00:10:04.116689516Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"127fdb84","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"127fdb84\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"94e0331fcf046a39dfa4b150cab0807b41735b3149fccd0d7298c096121f3177","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:10:04.059678592Z","io.kubernetes.cri-o.Image":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","i
o.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20250512-df8de77b","io.kubernetes.cri-o.ImageRef":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-h2rjf\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7138d7c0-f231-4d14-b296-954bc2c7b30f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-h2rjf_7138d7c0-f231-4d14-b296-954bc2c7b30f/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ba0272a4a5ee5a7a5c1b138b7939101955c3ee27f4663dcd3ddd5ec8fbb1e3d7/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-h2rjf_kube-system_7138d7c0-f231-4d14-b296-954bc2c7b30f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/e619e5a0562ffcd7bc482f117b62601631265ce9645edf2aedb693e7c1a2fc70/userdata/resolv.conf","i
o.kubernetes.cri-o.SandboxID":"e619e5a0562ffcd7bc482f117b62601631265ce9645edf2aedb693e7c1a2fc70","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-h2rjf_kube-system_7138d7c0-f231-4d14-b296-954bc2c7b30f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/7138d7c0-f231-4d14-b296-954bc2c7b30f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7138d7c0-f231-4d14-b296-954bc2c7b30f/containers/kindnet-cni/4d72a211\",\"readonly\":false,\"pr
opagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/7138d7c0-f231-4d14-b296-954bc2c7b30f/volumes/kubernetes.io~projected/kube-api-access-k9vvw\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-h2rjf","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7138d7c0-f231-4d14-b296-954bc2c7b30f","kubernetes.io/config.seen":"2025-09-17T00:10:03.681135711Z","kubernetes.io/config.source":"api","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d76ec241bed3bcb57087cc1c50
58989fed0c0f9f2a54e1df71b62af8dd654498","pid":1400,"status":"running","bundle":"/run/containers/storage/overlay-containers/d76ec241bed3bcb57087cc1c5058989fed0c0f9f2a54e1df71b62af8dd654498/userdata","rootfs":"/var/lib/containers/storage/overlay/69ee255b8779d533ce6aa66ad9b9c699e030de87bf4f7c77811ae7364414db01/merged","created":"2025-09-17T00:09:54.291230459Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d671eaa0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d671eaa0\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8441,\\\"containerPort\\\":8441,\\\"protocol\\\":\\\"TCP\\\"}]
\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d76ec241bed3bcb57087cc1c5058989fed0c0f9f2a54e1df71b62af8dd654498","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:09:54.237475068Z","io.kubernetes.cri-o.Image":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri-o.ImageRef":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-836309\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"603b51070fa2e0162ede61a457f6e7be\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods
/kube-system_kube-apiserver-functional-836309_603b51070fa2e0162ede61a457f6e7be/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/69ee255b8779d533ce6aa66ad9b9c699e030de87bf4f7c77811ae7364414db01/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-836309_kube-system_603b51070fa2e0162ede61a457f6e7be_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/eee09ead52099797f8fcd3f4068186fe5caac1b13683a014c5840ee189b98b28/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"eee09ead52099797f8fcd3f4068186fe5caac1b13683a014c5840ee189b98b28","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-functional-836309_kube-system_603b51070fa2e0162ede61a457f6e7be_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":
\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/603b51070fa2e0162ede61a457f6e7be/containers/kube-apiserver/9703226d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/603b51070fa2e0162ede61a457f6e7be/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certifica
tes\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-functional-836309","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"603b51070fa2e0162ede61a457f6e7be","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"603b51070fa2e0162ede61a457f6e7be","kubernetes.io/config.seen":"2025-09-17T00:09:53.747005704Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e07f8f9f5c0fd9fc2a846a6aa616e43475343e86bfb9b1f1d885429905018f2b","pid":1465,"status":"running","bundle":"/run/containers/storage/overlay-containers/e07f8f9f5c0fd9fc2a846a6aa616e4347
5343e86bfb9b1f1d885429905018f2b/userdata","rootfs":"/var/lib/containers/storage/overlay/584abdc13d34f3ab3c520d5e60c4547ccb059682739067e72bdea8979bc60e43/merged","created":"2025-09-17T00:09:54.34006882Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7eaa1830","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7eaa1830\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.co
ntainer.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e07f8f9f5c0fd9fc2a846a6aa616e43475343e86bfb9b1f1d885429905018f2b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:09:54.271193967Z","io.kubernetes.cri-o.Image":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri-o.ImageRef":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-836309\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9e747d5c637f51047e37ce486d158585\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-836309_9e747d5c637f51047e37ce486d158585/kube-controller-manager/0.log
","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/584abdc13d34f3ab3c520d5e60c4547ccb059682739067e72bdea8979bc60e43/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-functional-836309_kube-system_9e747d5c637f51047e37ce486d158585_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/073e9000e2cbd06f3aad6cf03fd7be4d1b371c4a67572bc98469644f4edf90f3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"073e9000e2cbd06f3aad6cf03fd7be4d1b371c4a67572bc98469644f4edf90f3","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-functional-836309_kube-system_9e747d5c637f51047e37ce486d158585_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\"
,\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9e747d5c637f51047e37ce486d158585/containers/kube-controller-manager/bb3632ea\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9e747d5c637f51047e37ce486d158585/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/
minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-functional-836309","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9e747d5c637f51047e37ce486d158585","kubernetes.io/config.hash":"9e747d5c637f51047e37ce486d158585","kubernetes.io/config.seen":"2025-09-17T00:09:53.747007629Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.Ti
meoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fd4423f996e172ec520acd90ab88ecb92a9bfa721cc812a9d73b36f24a393306","pid":1458,"status":"running","bundle":"/run/containers/storage/overlay-containers/fd4423f996e172ec520acd90ab88ecb92a9bfa721cc812a9d73b36f24a393306/userdata","rootfs":"/var/lib/containers/storage/overlay/058b6e2357fba28a9d1164babfa22dcd06f3ba79e97d2cda44819e2eee4be067/merged","created":"2025-09-17T00:09:54.335628233Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"85eae708","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"85eae708\",\"io.kubernetes.container.ports\":\"[{\
\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"fd4423f996e172ec520acd90ab88ecb92a9bfa721cc812a9d73b36f24a393306","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:09:54.270151486Z","io.kubernetes.cri-o.Image":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri-o.ImageRef":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-836309\",\"io.kubernetes.pod.namespace\":\"kube-system
\",\"io.kubernetes.pod.uid\":\"ecf1b2d1c2a0f49b55930852af1e133c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-836309_ecf1b2d1c2a0f49b55930852af1e133c/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/058b6e2357fba28a9d1164babfa22dcd06f3ba79e97d2cda44819e2eee4be067/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-functional-836309_kube-system_ecf1b2d1c2a0f49b55930852af1e133c_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c5ca55e367f9f4acc412276c733c009f12576d4624b822dda42ec4530cba32a8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c5ca55e367f9f4acc412276c733c009f12576d4624b822dda42ec4530cba32a8","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-836309_kube-system_ecf1b2d1c2a0f49b55930852af1e133c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes
.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ecf1b2d1c2a0f49b55930852af1e133c/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ecf1b2d1c2a0f49b55930852af1e133c/containers/kube-scheduler/c5c4ea58\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-functional-836309","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ecf1b2d1c2a0f49b55930852af1e133c","kubernetes.io/config.hash":"ecf1b2d1c2a0f49b55930852af1e133c","kubernetes.io/config.seen":"2025-09-17T00:09:53.747009027Z","kubernetes.io/config.source"
:"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fee9c2e341d4f1fd20c4ea1c22db8cd7eca409574ec8835d434658453643976f","pid":2196,"status":"running","bundle":"/run/containers/storage/overlay-containers/fee9c2e341d4f1fd20c4ea1c22db8cd7eca409574ec8835d434658453643976f/userdata","rootfs":"/var/lib/containers/storage/overlay/6431c100c558ffee4fc0d63b7f47f1d1dd77e132103be1de8438fe6f18b82b1f/merged","created":"2025-09-17T00:10:15.392915532Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6c6bf961","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.co
ntainer.hash\":\"6c6bf961\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"fee9c2e341d4f1fd20c4ea1c22db8cd7eca409574ec8835d434658453643976f","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:10:15.347081538Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4148aae6-c97a-4dec-98b0-172efdad09fb\"}","io.kubernetes.cri-
o.LogPath":"/var/log/pods/kube-system_storage-provisioner_4148aae6-c97a-4dec-98b0-172efdad09fb/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6431c100c558ffee4fc0d63b7f47f1d1dd77e132103be1de8438fe6f18b82b1f/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_4148aae6-c97a-4dec-98b0-172efdad09fb_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/9bd06274bf9f1709c5d3c25159ef30473a65626d4999976267a404195d04bb48/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"9bd06274bf9f1709c5d3c25159ef30473a65626d4999976267a404195d04bb48","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_4148aae6-c97a-4dec-98b0-172efdad09fb_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"cont
ainer_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4148aae6-c97a-4dec-98b0-172efdad09fb/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4148aae6-c97a-4dec-98b0-172efdad09fb/containers/storage-provisioner/89adafc5\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/4148aae6-c97a-4dec-98b0-172efdad09fb/volumes/kubernetes.io~projected/kube-api-access-9lvq2\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4148aae6-c97a-4dec-98b0-172efdad09fb","kubectl.kubernetes.io/last-applied
-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2025-09-17T00:10:14.994877345Z","kubernetes.io/config.source":"api","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0917 00:10:36.793381  570486 cri.go:126] list returned 8 containers
	I0917 00:10:36.793405  570486 cri.go:129] container: {ID:2590bb5313e648a1d5258fd84180691999e1fa74ac7e4a9bad97c4eaec4d2485 Status:running}
	I0917 00:10:36.793459  570486 cri.go:135] skipping {2590bb5313e648a1d5258fd84180691999e1fa74ac7e4a9bad97c4eaec4d2485 running}: state = "running", want "paused"
	I0917 00:10:36.793472  570486 cri.go:129] container: {ID:43960daf0ceb508755bb95ca37b4c30a5d31d7bdbf6bef6d16e3dbefa1056330 Status:running}
	I0917 00:10:36.793476  570486 cri.go:135] skipping {43960daf0ceb508755bb95ca37b4c30a5d31d7bdbf6bef6d16e3dbefa1056330 running}: state = "running", want "paused"
	I0917 00:10:36.793480  570486 cri.go:129] container: {ID:66e1997c75a09719465fdda73ab2f14bd72552ff33212c4d720f74944117320d Status:running}
	I0917 00:10:36.793483  570486 cri.go:135] skipping {66e1997c75a09719465fdda73ab2f14bd72552ff33212c4d720f74944117320d running}: state = "running", want "paused"
	I0917 00:10:36.793486  570486 cri.go:129] container: {ID:94e0331fcf046a39dfa4b150cab0807b41735b3149fccd0d7298c096121f3177 Status:running}
	I0917 00:10:36.793489  570486 cri.go:135] skipping {94e0331fcf046a39dfa4b150cab0807b41735b3149fccd0d7298c096121f3177 running}: state = "running", want "paused"
	I0917 00:10:36.793491  570486 cri.go:129] container: {ID:d76ec241bed3bcb57087cc1c5058989fed0c0f9f2a54e1df71b62af8dd654498 Status:running}
	I0917 00:10:36.793494  570486 cri.go:135] skipping {d76ec241bed3bcb57087cc1c5058989fed0c0f9f2a54e1df71b62af8dd654498 running}: state = "running", want "paused"
	I0917 00:10:36.793496  570486 cri.go:129] container: {ID:e07f8f9f5c0fd9fc2a846a6aa616e43475343e86bfb9b1f1d885429905018f2b Status:running}
	I0917 00:10:36.793498  570486 cri.go:135] skipping {e07f8f9f5c0fd9fc2a846a6aa616e43475343e86bfb9b1f1d885429905018f2b running}: state = "running", want "paused"
	I0917 00:10:36.793500  570486 cri.go:129] container: {ID:fd4423f996e172ec520acd90ab88ecb92a9bfa721cc812a9d73b36f24a393306 Status:running}
	I0917 00:10:36.793503  570486 cri.go:135] skipping {fd4423f996e172ec520acd90ab88ecb92a9bfa721cc812a9d73b36f24a393306 running}: state = "running", want "paused"
	I0917 00:10:36.793507  570486 cri.go:129] container: {ID:fee9c2e341d4f1fd20c4ea1c22db8cd7eca409574ec8835d434658453643976f Status:running}
	I0917 00:10:36.793509  570486 cri.go:135] skipping {fee9c2e341d4f1fd20c4ea1c22db8cd7eca409574ec8835d434658453643976f running}: state = "running", want "paused"
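The cri.go:129/135 lines above show minikube filtering the CRI container list by state: the pause path only acts on containers already in the wanted state, so every "running" container is skipped when "paused" is wanted. A minimal sketch of that filter, using illustrative names rather than minikube's actual cri package types:

package main

import "fmt"

// container mirrors the {ID Status} pairs printed by cri.go above.
type container struct {
	ID     string
	Status string
}

// filterByState keeps only containers whose state matches want; everything
// else is skipped, exactly as the "state = running, want paused" lines show.
func filterByState(cs []container, want string) []container {
	var kept []container
	for _, c := range cs {
		if c.Status != want {
			fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
			continue
		}
		kept = append(kept, c)
	}
	return kept
}

func main() {
	// All eight containers in the log are running, so nothing survives a
	// "paused" filter and the pause path has no work to do. (ID truncated.)
	cs := []container{{ID: "2590bb5313e6", Status: "running"}}
	fmt.Println(filterByState(cs, "paused"))
}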
	I0917 00:10:36.793553  570486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:10:36.803655  570486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:10:36.803666  570486 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:10:36.803710  570486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:10:36.813281  570486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:10:36.813810  570486 kubeconfig.go:125] found "functional-836309" server: "https://192.168.49.2:8441"
	I0917 00:10:36.814973  570486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:10:36.824747  570486 kubeadm.go:636] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-09-17 00:09:49.486353269 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-09-17 00:10:36.119406201 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
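The drift check above works by exit code: minikube runs `sudo diff -u` against the previously written kubeadm.yaml and the freshly rendered kubeadm.yaml.new, and a non-zero exit (here caused by the changed enable-admission-plugins value) triggers a cluster reconfigure, with the unified diff logged verbatim. A hedged sketch of that mechanism, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new`: exit 0 means identical, exit 1 means
// the files differ (the diff text is on stdout), and exit >1 means diff failed.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // no drift
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // files differ: reconfigure
	}
	return false, "", err
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Println("detected kubeadm config drift:\n" + diff)
	}
}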
	I0917 00:10:36.824760  570486 kubeadm.go:1152] stopping kube-system containers ...
	I0917 00:10:36.824776  570486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 00:10:36.824832  570486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:10:36.862861  570486 cri.go:89] found id: "43960daf0ceb508755bb95ca37b4c30a5d31d7bdbf6bef6d16e3dbefa1056330"
	I0917 00:10:36.862874  570486 cri.go:89] found id: "fee9c2e341d4f1fd20c4ea1c22db8cd7eca409574ec8835d434658453643976f"
	I0917 00:10:36.862878  570486 cri.go:89] found id: "94e0331fcf046a39dfa4b150cab0807b41735b3149fccd0d7298c096121f3177"
	I0917 00:10:36.862880  570486 cri.go:89] found id: "2590bb5313e648a1d5258fd84180691999e1fa74ac7e4a9bad97c4eaec4d2485"
	I0917 00:10:36.862882  570486 cri.go:89] found id: "e07f8f9f5c0fd9fc2a846a6aa616e43475343e86bfb9b1f1d885429905018f2b"
	I0917 00:10:36.862884  570486 cri.go:89] found id: "fd4423f996e172ec520acd90ab88ecb92a9bfa721cc812a9d73b36f24a393306"
	I0917 00:10:36.862885  570486 cri.go:89] found id: "66e1997c75a09719465fdda73ab2f14bd72552ff33212c4d720f74944117320d"
	I0917 00:10:36.862887  570486 cri.go:89] found id: "d76ec241bed3bcb57087cc1c5058989fed0c0f9f2a54e1df71b62af8dd654498"
	I0917 00:10:36.862889  570486 cri.go:89] found id: ""
	I0917 00:10:36.862894  570486 cri.go:252] Stopping containers: [43960daf0ceb508755bb95ca37b4c30a5d31d7bdbf6bef6d16e3dbefa1056330 fee9c2e341d4f1fd20c4ea1c22db8cd7eca409574ec8835d434658453643976f 94e0331fcf046a39dfa4b150cab0807b41735b3149fccd0d7298c096121f3177 2590bb5313e648a1d5258fd84180691999e1fa74ac7e4a9bad97c4eaec4d2485 e07f8f9f5c0fd9fc2a846a6aa616e43475343e86bfb9b1f1d885429905018f2b fd4423f996e172ec520acd90ab88ecb92a9bfa721cc812a9d73b36f24a393306 66e1997c75a09719465fdda73ab2f14bd72552ff33212c4d720f74944117320d d76ec241bed3bcb57087cc1c5058989fed0c0f9f2a54e1df71b62af8dd654498]
	I0917 00:10:36.862945  570486 ssh_runner.go:195] Run: which crictl
	I0917 00:10:36.867026  570486 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 43960daf0ceb508755bb95ca37b4c30a5d31d7bdbf6bef6d16e3dbefa1056330 fee9c2e341d4f1fd20c4ea1c22db8cd7eca409574ec8835d434658453643976f 94e0331fcf046a39dfa4b150cab0807b41735b3149fccd0d7298c096121f3177 2590bb5313e648a1d5258fd84180691999e1fa74ac7e4a9bad97c4eaec4d2485 e07f8f9f5c0fd9fc2a846a6aa616e43475343e86bfb9b1f1d885429905018f2b fd4423f996e172ec520acd90ab88ecb92a9bfa721cc812a9d73b36f24a393306 66e1997c75a09719465fdda73ab2f14bd72552ff33212c4d720f74944117320d d76ec241bed3bcb57087cc1c5058989fed0c0f9f2a54e1df71b62af8dd654498
	I0917 00:11:00.002990  570486 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 43960daf0ceb508755bb95ca37b4c30a5d31d7bdbf6bef6d16e3dbefa1056330 fee9c2e341d4f1fd20c4ea1c22db8cd7eca409574ec8835d434658453643976f 94e0331fcf046a39dfa4b150cab0807b41735b3149fccd0d7298c096121f3177 2590bb5313e648a1d5258fd84180691999e1fa74ac7e4a9bad97c4eaec4d2485 e07f8f9f5c0fd9fc2a846a6aa616e43475343e86bfb9b1f1d885429905018f2b fd4423f996e172ec520acd90ab88ecb92a9bfa721cc812a9d73b36f24a393306 66e1997c75a09719465fdda73ab2f14bd72552ff33212c4d720f74944117320d d76ec241bed3bcb57087cc1c5058989fed0c0f9f2a54e1df71b62af8dd654498: (23.135928626s)
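`crictl stop --timeout=10` asks each container to exit gracefully and kills it after 10 seconds; the 23.1s total for eight containers suggests some of them ran out their grace period. An illustrative wrapper for that step (the container IDs here are placeholders):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// stopContainers shells out the same way the log does and reports how long
// the graceful-stop pass took overall.
func stopContainers(ids []string) (time.Duration, error) {
	args := append([]string{"/usr/bin/crictl", "stop", "--timeout=10"}, ids...)
	start := time.Now()
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return 0, fmt.Errorf("crictl stop: %v\n%s", err, out)
	}
	return time.Since(start), nil
}

func main() {
	d, err := stopContainers([]string{"<container-id-1>", "<container-id-2>"})
	fmt.Println(d, err)
}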
	I0917 00:11:00.003058  570486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 00:11:00.049944  570486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 00:11:00.060114  570486 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Sep 17 00:09 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Sep 17 00:09 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Sep 17 00:09 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Sep 17 00:09 /etc/kubernetes/scheduler.conf
	
	I0917 00:11:00.060172  570486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0917 00:11:00.069789  570486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0917 00:11:00.079752  570486 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:11:00.079815  570486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 00:11:00.089062  570486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0917 00:11:00.098893  570486 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:11:00.098941  570486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 00:11:00.108278  570486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0917 00:11:00.117779  570486 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:11:00.117831  570486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
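The three grep-and-remove passes above implement a simple sanity rule: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted, so the subsequent `kubeadm init phase kubeconfig all` regenerates it. A local-filesystem sketch of the same rule (the real flow runs `sudo grep` over SSH; the helper below is illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs removes any listed kubeconfig that does not mention
// the expected control-plane endpoint, leaving kubeadm to recreate it.
func pruneStaleKubeconfigs(endpoint string, paths []string) error {
	for _, p := range paths {
		b, err := os.ReadFile(p)
		if err != nil {
			return err
		}
		if !strings.Contains(string(b), endpoint) {
			fmt.Printf("%q not in %s - removing\n", endpoint, p)
			if err := os.Remove(p); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	paths := []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	if err := pruneStaleKubeconfigs("https://control-plane.minikube.internal:8441", paths); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}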
	I0917 00:11:00.127657  570486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 00:11:00.137408  570486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 00:11:00.181547  570486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 00:11:01.249708  570486 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.06813424s)
	I0917 00:11:01.249727  570486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 00:11:01.442617  570486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 00:11:01.501965  570486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 00:11:01.563464  570486 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:11:01.563533  570486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:11:02.064413  570486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:11:02.080368  570486 api_server.go:72] duration metric: took 516.900656ms to wait for apiserver process to appear ...
	I0917 00:11:02.080400  570486 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:11:02.080434  570486 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0917 00:11:02.080844  570486 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0917 00:11:02.580551  570486 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0917 00:11:03.389673  570486 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 00:11:03.389692  570486 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 00:11:03.389708  570486 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0917 00:11:03.395495  570486 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 00:11:03.395514  570486 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 00:11:03.580911  570486 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0917 00:11:03.586488  570486 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 00:11:03.586522  570486 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 00:11:04.081249  570486 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0917 00:11:04.085853  570486 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 00:11:04.085872  570486 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 00:11:04.580484  570486 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0917 00:11:04.584918  570486 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 00:11:04.584938  570486 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 00:11:05.080597  570486 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0917 00:11:05.085331  570486 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0917 00:11:05.091874  570486 api_server.go:141] control plane version: v1.34.0
	I0917 00:11:05.091896  570486 api_server.go:131] duration metric: took 3.011488947s to wait for apiserver health ...
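The healthz sequence above is a plain polling loop: unauthenticated GETs against /healthz roughly every 500ms, treating 403 (anonymous access before RBAC bootstrap roles exist) and 500 (post-start hooks such as rbac/bootstrap-roles still failing) as "not ready yet" until the endpoint returns 200 "ok". A minimal sketch under those assumptions, with InsecureSkipVerify standing in for minikube's handling of the apiserver's self-signed certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it answers 200, or gives up at the deadline.
// Non-200 answers (403, 500) are logged and retried, as in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
			fmt.Printf("%s returned %d, retrying\n", url, resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8441/healthz", 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}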
	I0917 00:11:05.091909  570486 cni.go:84] Creating CNI manager for ""
	I0917 00:11:05.091915  570486 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 00:11:05.094272  570486 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0917 00:11:05.095779  570486 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0917 00:11:05.100478  570486 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0917 00:11:05.100489  570486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0917 00:11:05.120436  570486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0917 00:11:05.446910  570486 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:11:05.450372  570486 system_pods.go:59] 8 kube-system pods found
	I0917 00:11:05.450425  570486 system_pods.go:61] "coredns-66bc5c9577-zvmqf" [63ff7599-3b25-4ca5-846c-232621ea9f1e] Running
	I0917 00:11:05.450435  570486 system_pods.go:61] "etcd-functional-836309" [e7733d33-ceb2-46d0-be75-c8b4572e46f4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:11:05.450440  570486 system_pods.go:61] "kindnet-h2rjf" [7138d7c0-f231-4d14-b296-954bc2c7b30f] Running
	I0917 00:11:05.450447  570486 system_pods.go:61] "kube-apiserver-functional-836309" [9f105fea-0df0-492d-9ad3-83e40a67a0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:11:05.450455  570486 system_pods.go:61] "kube-controller-manager-functional-836309" [e589107d-9e3a-4e14-8080-772131e8b8ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:11:05.450462  570486 system_pods.go:61] "kube-proxy-cbvjf" [4e12d004-8422-442f-89a4-6455461dbebc] Running
	I0917 00:11:05.450474  570486 system_pods.go:61] "kube-scheduler-functional-836309" [3273d143-c5c6-4ecd-8938-e7912677fb5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:11:05.450481  570486 system_pods.go:61] "storage-provisioner" [4148aae6-c97a-4dec-98b0-172efdad09fb] Running
	I0917 00:11:05.450488  570486 system_pods.go:74] duration metric: took 3.563543ms to wait for pod list to return data ...
	I0917 00:11:05.450497  570486 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:11:05.453309  570486 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:11:05.453325  570486 node_conditions.go:123] node cpu capacity is 8
	I0917 00:11:05.453341  570486 node_conditions.go:105] duration metric: took 2.840738ms to run NodePressure ...
	I0917 00:11:05.453357  570486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 00:11:05.710696  570486 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0917 00:11:05.713878  570486 kubeadm.go:735] kubelet initialised
	I0917 00:11:05.713890  570486 kubeadm.go:736] duration metric: took 3.178493ms waiting for restarted kubelet to initialise ...
	I0917 00:11:05.713906  570486 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 00:11:05.726183  570486 ops.go:34] apiserver oom_adj: -16
	I0917 00:11:05.726199  570486 kubeadm.go:593] duration metric: took 28.922527887s to restartPrimaryControlPlane
	I0917 00:11:05.726210  570486 kubeadm.go:394] duration metric: took 28.994682097s to StartCluster
	I0917 00:11:05.726242  570486 settings.go:142] acquiring lock: {Name:mk3b4e5824fb8718eece00dc70a9d05f0af2a028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:11:05.726321  570486 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:11:05.727044  570486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:11:05.727363  570486 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:11:05.727432  570486 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:11:05.727543  570486 addons.go:69] Setting storage-provisioner=true in profile "functional-836309"
	I0917 00:11:05.727558  570486 addons.go:238] Setting addon storage-provisioner=true in "functional-836309"
	W0917 00:11:05.727564  570486 addons.go:247] addon storage-provisioner should already be in state true
	I0917 00:11:05.727571  570486 addons.go:69] Setting default-storageclass=true in profile "functional-836309"
	I0917 00:11:05.727587  570486 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-836309"
	I0917 00:11:05.727596  570486 host.go:66] Checking if "functional-836309" exists ...
	I0917 00:11:05.727611  570486 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:11:05.727972  570486 cli_runner.go:164] Run: docker container inspect functional-836309 --format={{.State.Status}}
	I0917 00:11:05.728125  570486 cli_runner.go:164] Run: docker container inspect functional-836309 --format={{.State.Status}}
	I0917 00:11:05.732717  570486 out.go:179] * Verifying Kubernetes components...
	I0917 00:11:05.733693  570486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:11:05.753926  570486 addons.go:238] Setting addon default-storageclass=true in "functional-836309"
	W0917 00:11:05.753943  570486 addons.go:247] addon default-storageclass should already be in state true
	I0917 00:11:05.753977  570486 host.go:66] Checking if "functional-836309" exists ...
	I0917 00:11:05.754556  570486 cli_runner.go:164] Run: docker container inspect functional-836309 --format={{.State.Status}}
	I0917 00:11:05.755127  570486 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 00:11:05.757661  570486 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:11:05.757674  570486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 00:11:05.757736  570486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-836309
	I0917 00:11:05.783934  570486 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 00:11:05.783952  570486 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 00:11:05.784010  570486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-836309
	I0917 00:11:05.788379  570486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/functional-836309/id_rsa Username:docker}
	I0917 00:11:05.807971  570486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/functional-836309/id_rsa Username:docker}
	I0917 00:11:05.891667  570486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:11:05.907120  570486 node_ready.go:35] waiting up to 6m0s for node "functional-836309" to be "Ready" ...
	I0917 00:11:05.907802  570486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:11:05.910037  570486 node_ready.go:49] node "functional-836309" is "Ready"
	I0917 00:11:05.910051  570486 node_ready.go:38] duration metric: took 2.90506ms for node "functional-836309" to be "Ready" ...
	I0917 00:11:05.910093  570486 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:11:05.910130  570486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:11:05.922925  570486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 00:11:06.425829  570486 api_server.go:72] duration metric: took 698.412239ms to wait for apiserver process to appear ...
	I0917 00:11:06.425846  570486 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:11:06.425864  570486 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0917 00:11:06.431106  570486 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0917 00:11:06.432038  570486 api_server.go:141] control plane version: v1.34.0
	I0917 00:11:06.432054  570486 api_server.go:131] duration metric: took 6.202527ms to wait for apiserver health ...
	I0917 00:11:06.432062  570486 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:11:06.433513  570486 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0917 00:11:06.435761  570486 addons.go:514] duration metric: took 708.335266ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0917 00:11:06.435804  570486 system_pods.go:59] 8 kube-system pods found
	I0917 00:11:06.435830  570486 system_pods.go:61] "coredns-66bc5c9577-zvmqf" [63ff7599-3b25-4ca5-846c-232621ea9f1e] Running
	I0917 00:11:06.435849  570486 system_pods.go:61] "etcd-functional-836309" [e7733d33-ceb2-46d0-be75-c8b4572e46f4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:11:06.435854  570486 system_pods.go:61] "kindnet-h2rjf" [7138d7c0-f231-4d14-b296-954bc2c7b30f] Running
	I0917 00:11:06.435863  570486 system_pods.go:61] "kube-apiserver-functional-836309" [9f105fea-0df0-492d-9ad3-83e40a67a0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:11:06.435869  570486 system_pods.go:61] "kube-controller-manager-functional-836309" [e589107d-9e3a-4e14-8080-772131e8b8ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:11:06.435872  570486 system_pods.go:61] "kube-proxy-cbvjf" [4e12d004-8422-442f-89a4-6455461dbebc] Running
	I0917 00:11:06.435878  570486 system_pods.go:61] "kube-scheduler-functional-836309" [3273d143-c5c6-4ecd-8938-e7912677fb5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:11:06.435902  570486 system_pods.go:61] "storage-provisioner" [4148aae6-c97a-4dec-98b0-172efdad09fb] Running
	I0917 00:11:06.435910  570486 system_pods.go:74] duration metric: took 3.842414ms to wait for pod list to return data ...
	I0917 00:11:06.435919  570486 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:11:06.439229  570486 default_sa.go:45] found service account: "default"
	I0917 00:11:06.439246  570486 default_sa.go:55] duration metric: took 3.321781ms for default service account to be created ...
	I0917 00:11:06.439255  570486 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:11:06.442978  570486 system_pods.go:86] 8 kube-system pods found
	I0917 00:11:06.443000  570486 system_pods.go:89] "coredns-66bc5c9577-zvmqf" [63ff7599-3b25-4ca5-846c-232621ea9f1e] Running
	I0917 00:11:06.443008  570486 system_pods.go:89] "etcd-functional-836309" [e7733d33-ceb2-46d0-be75-c8b4572e46f4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:11:06.443014  570486 system_pods.go:89] "kindnet-h2rjf" [7138d7c0-f231-4d14-b296-954bc2c7b30f] Running
	I0917 00:11:06.443024  570486 system_pods.go:89] "kube-apiserver-functional-836309" [9f105fea-0df0-492d-9ad3-83e40a67a0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:11:06.443030  570486 system_pods.go:89] "kube-controller-manager-functional-836309" [e589107d-9e3a-4e14-8080-772131e8b8ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:11:06.443035  570486 system_pods.go:89] "kube-proxy-cbvjf" [4e12d004-8422-442f-89a4-6455461dbebc] Running
	I0917 00:11:06.443050  570486 system_pods.go:89] "kube-scheduler-functional-836309" [3273d143-c5c6-4ecd-8938-e7912677fb5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:11:06.443054  570486 system_pods.go:89] "storage-provisioner" [4148aae6-c97a-4dec-98b0-172efdad09fb] Running
	I0917 00:11:06.443062  570486 system_pods.go:126] duration metric: took 3.802457ms to wait for k8s-apps to be running ...
	I0917 00:11:06.443071  570486 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:11:06.443128  570486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:11:06.456703  570486 system_svc.go:56] duration metric: took 13.618599ms WaitForService to wait for kubelet
	I0917 00:11:06.456729  570486 kubeadm.go:578] duration metric: took 729.315846ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:11:06.456752  570486 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:11:06.459661  570486 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:11:06.459675  570486 node_conditions.go:123] node cpu capacity is 8
	I0917 00:11:06.459687  570486 node_conditions.go:105] duration metric: took 2.930757ms to run NodePressure ...
	I0917 00:11:06.459699  570486 start.go:241] waiting for startup goroutines ...
	I0917 00:11:06.459704  570486 start.go:246] waiting for cluster config update ...
	I0917 00:11:06.459713  570486 start.go:255] writing updated cluster config ...
	I0917 00:11:06.459984  570486 ssh_runner.go:195] Run: rm -f paused
	I0917 00:11:06.464025  570486 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:11:06.467200  570486 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zvmqf" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:11:06.471672  570486 pod_ready.go:94] pod "coredns-66bc5c9577-zvmqf" is "Ready"
	I0917 00:11:06.471687  570486 pod_ready.go:86] duration metric: took 4.473737ms for pod "coredns-66bc5c9577-zvmqf" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:11:06.473558  570486 pod_ready.go:83] waiting for pod "etcd-functional-836309" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:11:07.979913  570486 pod_ready.go:94] pod "etcd-functional-836309" is "Ready"
	I0917 00:11:07.979931  570486 pod_ready.go:86] duration metric: took 1.506360539s for pod "etcd-functional-836309" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:11:07.982234  570486 pod_ready.go:83] waiting for pod "kube-apiserver-functional-836309" in "kube-system" namespace to be "Ready" or be gone ...
	W0917 00:11:09.988039  570486 pod_ready.go:104] pod "kube-apiserver-functional-836309" is not "Ready", error: <nil>
	W0917 00:11:12.487540  570486 pod_ready.go:104] pod "kube-apiserver-functional-836309" is not "Ready", error: <nil>
	W0917 00:11:14.488491  570486 pod_ready.go:104] pod "kube-apiserver-functional-836309" is not "Ready", error: <nil>
	I0917 00:11:15.988218  570486 pod_ready.go:94] pod "kube-apiserver-functional-836309" is "Ready"
	I0917 00:11:15.988235  570486 pod_ready.go:86] duration metric: took 8.005987167s for pod "kube-apiserver-functional-836309" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:11:15.990440  570486 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-836309" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:11:17.496732  570486 pod_ready.go:94] pod "kube-controller-manager-functional-836309" is "Ready"
	I0917 00:11:17.496754  570486 pod_ready.go:86] duration metric: took 1.506298682s for pod "kube-controller-manager-functional-836309" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:11:17.499531  570486 pod_ready.go:83] waiting for pod "kube-proxy-cbvjf" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:11:17.504481  570486 pod_ready.go:94] pod "kube-proxy-cbvjf" is "Ready"
	I0917 00:11:17.504504  570486 pod_ready.go:86] duration metric: took 4.957572ms for pod "kube-proxy-cbvjf" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:11:17.507206  570486 pod_ready.go:83] waiting for pod "kube-scheduler-functional-836309" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:11:17.511593  570486 pod_ready.go:94] pod "kube-scheduler-functional-836309" is "Ready"
	I0917 00:11:17.511609  570486 pod_ready.go:86] duration metric: took 4.389739ms for pod "kube-scheduler-functional-836309" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:11:17.511620  570486 pod_ready.go:40] duration metric: took 11.047572236s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:11:17.561567  570486 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 00:11:17.563642  570486 out.go:179] * Done! kubectl is now configured to use "functional-836309" cluster and "default" namespace by default
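The restart log above follows minikube's standard readiness sequence: poll the apiserver's /healthz endpoint on 192.168.49.2:8441, re-apply the CNI manifest, re-enable the configured addons, then wait for each kube-system control-plane pod to be "Ready". A minimal sketch for replaying the same checks by hand, assuming the functional-836309 context from this log is still present in the local kubeconfig:

  kubectl --context functional-836309 get --raw='/healthz?verbose'        # the endpoint the log polls
  kubectl --context functional-836309 -n kube-system get pods -o wide     # the eight pods listed above
  kubectl --context functional-836309 -n kube-system wait pod -l k8s-app=kube-dns \
    --for=condition=Ready --timeout=240s                                  # mirrors the pod_ready wait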
	
	
	==> CRI-O <==
	Sep 17 00:16:00 functional-836309 crio[4225]: time="2025-09-17 00:16:00.847943356Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=5bcd865e-78fd-485b-94cb-1f4e159a1458 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:16:00 functional-836309 crio[4225]: time="2025-09-17 00:16:00.853403376Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Sep 17 00:16:13 functional-836309 crio[4225]: time="2025-09-17 00:16:13.538649766Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=287535cc-c638-49dd-ade6-ca63b15559c0 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:16:13 functional-836309 crio[4225]: time="2025-09-17 00:16:13.538951175Z" level=info msg="Image docker.io/nginx:alpine not found" id=287535cc-c638-49dd-ade6-ca63b15559c0 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:16:28 functional-836309 crio[4225]: time="2025-09-17 00:16:28.538301918Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=af578cb2-8f96-4dbe-892b-84fcd9626f76 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:16:28 functional-836309 crio[4225]: time="2025-09-17 00:16:28.538632455Z" level=info msg="Image docker.io/nginx:alpine not found" id=af578cb2-8f96-4dbe-892b-84fcd9626f76 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:16:30 functional-836309 crio[4225]: time="2025-09-17 00:16:30.951321329Z" level=info msg="Pulling image: docker.io/nginx:latest" id=65ed5ecf-36ff-4565-afd8-f47144caa049 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:16:30 functional-836309 crio[4225]: time="2025-09-17 00:16:30.952886844Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 17 00:16:39 functional-836309 crio[4225]: time="2025-09-17 00:16:39.539673181Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=41be9a9f-1d44-44be-8107-3513165df61c name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:16:39 functional-836309 crio[4225]: time="2025-09-17 00:16:39.539885690Z" level=info msg="Image docker.io/nginx:alpine not found" id=41be9a9f-1d44-44be-8107-3513165df61c name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:16:42 functional-836309 crio[4225]: time="2025-09-17 00:16:42.538067859Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=5a01c847-a5f4-4c3e-b495-7eb00fd502a7 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:16:42 functional-836309 crio[4225]: time="2025-09-17 00:16:42.538323454Z" level=info msg="Image docker.io/mysql:5.7 not found" id=5a01c847-a5f4-4c3e-b495-7eb00fd502a7 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:16:50 functional-836309 crio[4225]: time="2025-09-17 00:16:50.538606869Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=cc38a2af-4665-4808-9325-efe1cfb1e222 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:16:50 functional-836309 crio[4225]: time="2025-09-17 00:16:50.538897483Z" level=info msg="Image docker.io/nginx:alpine not found" id=cc38a2af-4665-4808-9325-efe1cfb1e222 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:16:53 functional-836309 crio[4225]: time="2025-09-17 00:16:53.538753872Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=7e2826c3-8b4a-4614-80a9-02d57c6052b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:16:53 functional-836309 crio[4225]: time="2025-09-17 00:16:53.539010791Z" level=info msg="Image docker.io/mysql:5.7 not found" id=7e2826c3-8b4a-4614-80a9-02d57c6052b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:17:01 functional-836309 crio[4225]: time="2025-09-17 00:17:01.047903278Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7cfd1f2f-aef2-408f-bfda-95765546a957 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:17:01 functional-836309 crio[4225]: time="2025-09-17 00:17:01.048702457Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=e624e925-8f45-45e8-992c-777b1d3b7820 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:17:01 functional-836309 crio[4225]: time="2025-09-17 00:17:01.056593414Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 17 00:17:08 functional-836309 crio[4225]: time="2025-09-17 00:17:08.538813488Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=ab14bb20-b1b3-4bcf-aa73-5286c1ffbaa0 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:17:08 functional-836309 crio[4225]: time="2025-09-17 00:17:08.539026257Z" level=info msg="Image docker.io/mysql:5.7 not found" id=ab14bb20-b1b3-4bcf-aa73-5286c1ffbaa0 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:17:19 functional-836309 crio[4225]: time="2025-09-17 00:17:19.538192809Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=895df419-74ad-4137-bdb0-68d32a8d8bb6 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:17:19 functional-836309 crio[4225]: time="2025-09-17 00:17:19.538446611Z" level=info msg="Image docker.io/mysql:5.7 not found" id=895df419-74ad-4137-bdb0-68d32a8d8bb6 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:17:31 functional-836309 crio[4225]: time="2025-09-17 00:17:31.538907778Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=083f15e0-74cb-4032-bfc0-b8f6a556e36e name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:17:31 functional-836309 crio[4225]: time="2025-09-17 00:17:31.539187160Z" level=info msg="Image docker.io/mysql:5.7 not found" id=083f15e0-74cb-4032-bfc0-b8f6a556e36e name=/runtime.v1.ImageService/ImageStatus
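The CRI-O log shows the kubelet re-checking docker.io/mysql:5.7 and docker.io/nginx:alpine every 10-15 seconds and finding neither, while the pulls started at 00:16:00 and 00:17:01 never report completion; this lines up with the MySQL- and nginx-dependent functional test failures. A sketch for inspecting pull state from inside the node, assuming default ssh access to this profile:

  minikube -p functional-836309 ssh -- sudo crictl images                            # images CRI-O has actually stored
  minikube -p functional-836309 ssh -- sudo crictl pull docker.io/library/mysql:5.7  # retry the stalled pull in the foreground
  minikube -p functional-836309 ssh -- sudo journalctl -u crio --since "10 min ago"  # any pull errors CRI-O logged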
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb474edf243b1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   d689b11bc9243       busybox-mount
	9f2aad7cc830a       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      6 minutes ago       Running             kube-apiserver            0                   cb31a6d151f18       kube-apiserver-functional-836309
	8fc6aae6af439       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      6 minutes ago       Running             kube-controller-manager   2                   073e9000e2cbd       kube-controller-manager-functional-836309
	a14ceabc188eb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      6 minutes ago       Running             etcd                      1                   bd997b17bb8d3       etcd-functional-836309
	888d62ee0b634       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      6 minutes ago       Exited              kube-controller-manager   1                   073e9000e2cbd       kube-controller-manager-functional-836309
	c06f60831d1a2       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      6 minutes ago       Running             kube-proxy                1                   04529c3273474       kube-proxy-cbvjf
	64858777ddc03       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      6 minutes ago       Running             kindnet-cni               1                   e619e5a0562ff       kindnet-h2rjf
	8414e6a217a0a       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      6 minutes ago       Running             kube-scheduler            1                   c5ca55e367f9f       kube-scheduler-functional-836309
	8750ce41941ba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       1                   9bd06274bf9f1       storage-provisioner
	9d874bdc79320       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      6 minutes ago       Running             coredns                   1                   4111a7c1816a0       coredns-66bc5c9577-zvmqf
	43960daf0ceb5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Exited              coredns                   0                   4111a7c1816a0       coredns-66bc5c9577-zvmqf
	fee9c2e341d4f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Exited              storage-provisioner       0                   9bd06274bf9f1       storage-provisioner
	94e0331fcf046       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      7 minutes ago       Exited              kindnet-cni               0                   e619e5a0562ff       kindnet-h2rjf
	2590bb5313e64       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      7 minutes ago       Exited              kube-proxy                0                   04529c3273474       kube-proxy-cbvjf
	fd4423f996e17       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      7 minutes ago       Exited              kube-scheduler            0                   c5ca55e367f9f       kube-scheduler-functional-836309
	66e1997c75a09       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      7 minutes ago       Exited              etcd                      0                   bd997b17bb8d3       etcd-functional-836309
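This table is CRI-level state: ATTEMPT counts container restarts within the same pod sandbox, which is why the Exited attempt-1 kube-controller-manager and the Running attempt-2 instance share one POD ID (the failed first attempt is explained by its own log further down). A sketch for reproducing the table and pulling an exited container's output, assuming the same ssh access as above:

  minikube -p functional-836309 ssh -- sudo crictl ps -a                 # running and exited containers, as above
  minikube -p functional-836309 ssh -- sudo crictl logs 888d62ee0b634    # the exited controller-manager attempt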
	
	
	==> coredns [43960daf0ceb508755bb95ca37b4c30a5d31d7bdbf6bef6d16e3dbefa1056330] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58276 - 22452 "HINFO IN 7807615287491316741.4205491171577213210. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036670075s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9d874bdc7932076f658b9567185beccffdb2e85d489d293dfe85e3e619013c1f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34900 - 1175 "HINFO IN 6559932629016620651.4444246566734803126. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.054012876s
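Both CoreDNS instances come up with the identical configuration SHA, and the first shuts down on SIGTERM with the normal 5s lameduck drain, so DNS does not look implicated in the failures. Both logs are retrievable from the one pod, since the kubelet keeps the previous container's output:

  kubectl --context functional-836309 -n kube-system logs coredns-66bc5c9577-zvmqf             # current instance (9d874bd...)
  kubectl --context functional-836309 -n kube-system logs coredns-66bc5c9577-zvmqf --previous  # exited instance (43960da...)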
	
	
	==> describe nodes <==
	Name:               functional-836309
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-836309
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=functional-836309
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T00_09_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:09:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-836309
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:17:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:16:09 +0000   Wed, 17 Sep 2025 00:09:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:16:09 +0000   Wed, 17 Sep 2025 00:09:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:16:09 +0000   Wed, 17 Sep 2025 00:09:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:16:09 +0000   Wed, 17 Sep 2025 00:10:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-836309
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 67f7de0bcecd43499ea9b16c8c00a864
	  System UUID:                e097105d-a213-4ebf-95fe-cce4cad422c0
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-m76kz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  default                     mysql-5bb876957f-l9pq7                       600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     6m12s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-66bc5c9577-zvmqf                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m35s
	  kube-system                 etcd-functional-836309                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m41s
	  kube-system                 kindnet-h2rjf                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m36s
	  kube-system                 kube-apiserver-functional-836309             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 kube-controller-manager-functional-836309    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m41s
	  kube-system                 kube-proxy-cbvjf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 kube-scheduler-functional-836309             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m34s                  kube-proxy       
	  Normal  Starting                 6m55s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m46s (x8 over 7m46s)  kubelet          Node functional-836309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m46s (x8 over 7m46s)  kubelet          Node functional-836309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m46s (x8 over 7m46s)  kubelet          Node functional-836309 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     7m41s                  kubelet          Node functional-836309 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m41s                  kubelet          Node functional-836309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m41s                  kubelet          Node functional-836309 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m41s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m37s                  node-controller  Node functional-836309 event: Registered Node functional-836309 in Controller
	  Normal  NodeReady                7m25s                  kubelet          Node functional-836309 status is now: NodeReady
	  Normal  Starting                 6m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m38s (x8 over 6m38s)  kubelet          Node functional-836309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m38s (x8 over 6m38s)  kubelet          Node functional-836309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m38s (x8 over 6m38s)  kubelet          Node functional-836309 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m33s                  node-controller  Node functional-836309 event: Registered Node functional-836309 in Controller
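The percentages under Allocated resources are requests and limits over the node's allocatable capacity: 1450m of CPU requested against 8 cores is 1450/8000 ≈ 18%, and 732Mi of memory against 32863460Ki (≈ 31.3Gi) is ≈ 2%, matching the table; the node itself stays Ready with no pressure conditions. A sketch for reading the same conditions programmatically, assuming the context name from the log:

  kubectl --context functional-836309 get node functional-836309 \
    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'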
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	
	
	==> etcd [66e1997c75a09719465fdda73ab2f14bd72552ff33212c4d720f74944117320d] <==
	{"level":"warn","ts":"2025-09-17T00:09:55.212910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.220259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.227159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.234529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.243853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.251054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.257902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51972","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:10:42.783237Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-17T00:10:42.783351Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-836309","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-17T00:10:42.783494Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T00:10:49.785151Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T00:10:49.785250Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:10:49.785304Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-09-17T00:10:49.785881Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:10:49.785904Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:10:49.785429Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:10:49.785915Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-17T00:10:49.785929Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:10:49.785947Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:10:49.785969Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-17T00:10:49.785982Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-17T00:10:49.788632Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-17T00:10:49.788702Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:10:49.788727Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-17T00:10:49.788733Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-836309","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [a14ceabc188ebbf10535dda7c1f798592d2e79e03743ad28e2bd444ce75333ba] <==
	{"level":"warn","ts":"2025-09-17T00:11:02.753921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.761109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.770524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.777282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.783883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.791702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.799199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.806444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.812694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.819824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.828034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.834969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.841980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.849538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.863753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.870164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.878044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.884349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.890622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.898140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.905536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.912507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.926102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.939007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.982536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50360","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:17:39 up  3:00,  0 users,  load average: 0.76, 0.62, 11.59
	Linux functional-836309 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [64858777ddc0357994b52a6fd8bf79dba5ac39143453505e0f08e2a242aecae8] <==
	I0917 00:15:33.718519       1 main.go:301] handling current node
	I0917 00:15:43.716710       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:15:43.716773       1 main.go:301] handling current node
	I0917 00:15:53.725519       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:15:53.725570       1 main.go:301] handling current node
	I0917 00:16:03.721634       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:16:03.721685       1 main.go:301] handling current node
	I0917 00:16:13.716655       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:16:13.716712       1 main.go:301] handling current node
	I0917 00:16:23.718574       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:16:23.718633       1 main.go:301] handling current node
	I0917 00:16:33.716895       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:16:33.716944       1 main.go:301] handling current node
	I0917 00:16:43.716547       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:16:43.716582       1 main.go:301] handling current node
	I0917 00:16:53.716285       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:16:53.716324       1 main.go:301] handling current node
	I0917 00:17:03.724985       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:17:03.725020       1 main.go:301] handling current node
	I0917 00:17:13.716633       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:17:13.716694       1 main.go:301] handling current node
	I0917 00:17:23.717146       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:17:23.717190       1 main.go:301] handling current node
	I0917 00:17:33.722998       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:17:33.723073       1 main.go:301] handling current node
	
	
	==> kindnet [94e0331fcf046a39dfa4b150cab0807b41735b3149fccd0d7298c096121f3177] <==
	I0917 00:10:04.407562       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0917 00:10:04.407829       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0917 00:10:04.407974       1 main.go:148] setting mtu 1500 for CNI 
	I0917 00:10:04.407992       1 main.go:178] kindnetd IP family: "ipv4"
	I0917 00:10:04.408041       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-17T00:10:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0917 00:10:04.608241       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0917 00:10:04.608325       1 controller.go:381] "Waiting for informer caches to sync"
	I0917 00:10:04.608338       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0917 00:10:04.608850       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0917 00:10:05.008798       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0917 00:10:05.008823       1 metrics.go:72] Registering metrics
	I0917 00:10:05.008870       1 controller.go:711] "Syncing nftables rules"
	I0917 00:10:14.613627       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:14.613697       1 main.go:301] handling current node
	I0917 00:10:24.615570       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:24.615608       1 main.go:301] handling current node
	I0917 00:10:34.612524       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:34.612559       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9f2aad7cc830a3ec57ba1b3d2cd335c4f402ff995fba44cd8dd9944ea36855bb] <==
	I0917 00:11:03.464074       1 policy_source.go:240] refreshing policies
	I0917 00:11:03.486274       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 00:11:03.584916       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0917 00:11:04.356343       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0917 00:11:04.664177       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0917 00:11:04.665542       1 controller.go:667] quota admission added evaluator for: endpoints
	I0917 00:11:04.672449       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 00:11:05.439194       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0917 00:11:05.552856       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0917 00:11:05.620208       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0917 00:11:05.627889       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0917 00:11:07.148642       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0917 00:11:20.918862       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.144.51"}
	I0917 00:11:25.122372       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.76.119"}
	I0917 00:11:27.305503       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.106.76.206"}
	I0917 00:12:08.543295       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.9.127"}
	I0917 00:12:16.483422       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:12:26.931027       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:13:26.407498       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:13:39.931798       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:14:38.574091       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:15:09.009462       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:15:47.255026       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:16:24.666189       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:17:12.893722       1 stats.go:136] "Error getting keys" err="empty key: \"\""
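The apiserver records cluster IPs being allocated for exactly the services the failing tests create (invalid-svc, hello-node, mysql, nginx-svc), so service creation itself succeeds; the recurring "Error getting keys" err="empty key" lines come from the apiserver's storage stats collector and do not appear related to the failures. A sketch to confirm the allocations:

  kubectl --context functional-836309 get svc -o wide   # expect 10.110.144.51, 10.98.76.119, 10.106.76.206, 10.97.9.127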
	
	
	==> kube-controller-manager [888d62ee0b634c673d1878ce150c6f0034e298592a41de5b4a133d003db1a139] <==
	I0917 00:10:43.989698       1 serving.go:386] Generated self-signed cert in-memory
	I0917 00:10:44.308654       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0917 00:10:44.308686       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:10:44.310251       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 00:10:44.310301       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 00:10:44.310653       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0917 00:10:44.310800       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0917 00:10:56.321004       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
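This is the exited attempt-1 controller-manager from the container-status table: it started while the apiserver on 192.168.49.2:8441 was still down during the restart, timed out building its controller context, and exited; the attempt-2 instance in the next block then comes up and syncs all informer caches. A sketch for confirming the restart count and last termination state on the mirror pod:

  kubectl --context functional-836309 -n kube-system get pod kube-controller-manager-functional-836309 \
    -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}{.status.containerStatuses[0].lastState.terminated.reason}{"\n"}'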
	
	
	==> kube-controller-manager [8fc6aae6af439080e3411b9cb8143eddc1da6c5a6e3211c2a191a3dbfa865ca9] <==
	I0917 00:11:06.761652       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0917 00:11:06.764968       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0917 00:11:06.767367       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0917 00:11:06.772631       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0917 00:11:06.777964       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0917 00:11:06.780142       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0917 00:11:06.793750       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0917 00:11:06.793797       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0917 00:11:06.795031       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0917 00:11:06.795086       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0917 00:11:06.795122       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0917 00:11:06.795131       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0917 00:11:06.795137       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0917 00:11:06.795175       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0917 00:11:06.795208       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0917 00:11:06.797152       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0917 00:11:06.798500       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0917 00:11:06.800827       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:11:06.800851       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0917 00:11:06.800859       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 00:11:06.800834       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:11:06.803222       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:11:06.805177       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0917 00:11:06.807633       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0917 00:11:06.816406       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2590bb5313e648a1d5258fd84180691999e1fa74ac7e4a9bad97c4eaec4d2485] <==
	I0917 00:10:04.193311       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:10:04.263769       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:10:04.364709       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:10:04.364767       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:10:04.364855       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:10:04.385096       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:10:04.385159       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:10:04.390876       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:10:04.391486       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:10:04.391511       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:10:04.393121       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:10:04.393158       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:10:04.393167       1 config.go:200] "Starting service config controller"
	I0917 00:10:04.393187       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:10:04.393201       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:10:04.393189       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:10:04.393246       1 config.go:309] "Starting node config controller"
	I0917 00:10:04.393260       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:10:04.493428       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:10:04.493462       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:10:04.493428       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:10:04.493439       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c06f60831d1a27beead1133ee09bd56597eea7ed1a44bd377eb0a2445447cee8] <==
	I0917 00:10:43.389590       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:10:43.460712       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:10:43.561820       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:10:43.561866       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:10:43.561957       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:10:43.585276       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:10:43.585350       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:10:43.590785       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:10:43.591164       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:10:43.591200       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:10:43.593011       1 config.go:200] "Starting service config controller"
	I0917 00:10:43.593356       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:10:43.593113       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:10:43.593126       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:10:43.593435       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:10:43.593437       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:10:43.593165       1 config.go:309] "Starting node config controller"
	I0917 00:10:43.593494       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:10:43.593503       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:10:43.693526       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:10:43.693578       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:10:43.693636       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8414e6a217a0a65711aa4a8781ace6ed51c30407bf0166b9c4024dad4b506e9c] <==
	I0917 00:10:44.134044       1 serving.go:386] Generated self-signed cert in-memory
	I0917 00:10:51.491622       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:10:51.491651       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:10:51.496210       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:51.496222       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0917 00:10:51.496254       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:51.496251       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:10:51.496272       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:10:51.496256       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0917 00:10:51.496635       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:10:51.496706       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:10:51.596824       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:10:51.597020       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0917 00:10:51.597094       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0917 00:11:03.387571       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0917 00:11:03.387692       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0917 00:11:03.387722       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:11:03.387745       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0917 00:11:03.387764       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0917 00:11:03.387800       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	
	
	==> kube-scheduler [fd4423f996e172ec520acd90ab88ecb92a9bfa721cc812a9d73b36f24a393306] <==
	E0917 00:09:56.365834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0917 00:09:56.365882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0917 00:09:56.366012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0917 00:09:56.366067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0917 00:09:56.366104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0917 00:09:56.366185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0917 00:09:56.366277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0917 00:09:56.366176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0917 00:09:56.366533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0917 00:09:56.366612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0917 00:09:56.366642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0917 00:09:56.366681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0917 00:09:56.366732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:09:56.366735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0917 00:09:56.366804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0917 00:09:56.366825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0917 00:09:56.366896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0917 00:09:56.366939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I0917 00:09:57.962884       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:42.641974       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 00:10:42.642087       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:42.642285       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0917 00:10:42.642311       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0917 00:10:42.642328       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0917 00:10:42.642359       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 17 00:17:01 functional-836309 kubelet[5462]: E0917 00:17:01.047837    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="0f84d084-6e2e-4197-b486-4ba402096a6c"
	Sep 17 00:17:01 functional-836309 kubelet[5462]: E0917 00:17:01.048290    5462 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" image="kicbase/echo-server:latest"
	Sep 17 00:17:01 functional-836309 kubelet[5462]: E0917 00:17:01.048334    5462 kuberuntime_image.go:43] "Failed to pull image" err="short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" image="kicbase/echo-server:latest"
	Sep 17 00:17:01 functional-836309 kubelet[5462]: E0917 00:17:01.048502    5462 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-m76kz_default(de55227f-8aa8-49c2-b1dc-b0517b716b2d): ErrImagePull: short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" logger="UnhandledError"
	Sep 17 00:17:01 functional-836309 kubelet[5462]: E0917 00:17:01.049855    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-m76kz" podUID="de55227f-8aa8-49c2-b1dc-b0517b716b2d"
	Sep 17 00:17:01 functional-836309 kubelet[5462]: E0917 00:17:01.605341    5462 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068221605081305  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 17 00:17:01 functional-836309 kubelet[5462]: E0917 00:17:01.605384    5462 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068221605081305  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 17 00:17:08 functional-836309 kubelet[5462]: E0917 00:17:08.539360    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-l9pq7" podUID="a1c1727d-2e60-4a98-8ae8-aa7319d47aed"
	Sep 17 00:17:11 functional-836309 kubelet[5462]: E0917 00:17:11.607115    5462 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068231606796233  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 17 00:17:11 functional-836309 kubelet[5462]: E0917 00:17:11.607153    5462 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068231606796233  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 17 00:17:12 functional-836309 kubelet[5462]: E0917 00:17:12.538734    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="0f84d084-6e2e-4197-b486-4ba402096a6c"
	Sep 17 00:17:13 functional-836309 kubelet[5462]: E0917 00:17:13.538782    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-m76kz" podUID="de55227f-8aa8-49c2-b1dc-b0517b716b2d"
	Sep 17 00:17:19 functional-836309 kubelet[5462]: E0917 00:17:19.538864    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-l9pq7" podUID="a1c1727d-2e60-4a98-8ae8-aa7319d47aed"
	Sep 17 00:17:21 functional-836309 kubelet[5462]: E0917 00:17:21.608867    5462 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068241608617681  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 17 00:17:21 functional-836309 kubelet[5462]: E0917 00:17:21.608904    5462 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068241608617681  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 17 00:17:24 functional-836309 kubelet[5462]: E0917 00:17:24.538382    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="0f84d084-6e2e-4197-b486-4ba402096a6c"
	Sep 17 00:17:26 functional-836309 kubelet[5462]: E0917 00:17:26.537846    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-m76kz" podUID="de55227f-8aa8-49c2-b1dc-b0517b716b2d"
	Sep 17 00:17:31 functional-836309 kubelet[5462]: E0917 00:17:31.150006    5462 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 17 00:17:31 functional-836309 kubelet[5462]: E0917 00:17:31.150103    5462 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 17 00:17:31 functional-836309 kubelet[5462]: E0917 00:17:31.150230    5462 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx-svc_default(54252b1b-51bf-4359-848b-6b08a8f68dcd): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 00:17:31 functional-836309 kubelet[5462]: E0917 00:17:31.150280    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="54252b1b-51bf-4359-848b-6b08a8f68dcd"
	Sep 17 00:17:31 functional-836309 kubelet[5462]: E0917 00:17:31.539469    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-l9pq7" podUID="a1c1727d-2e60-4a98-8ae8-aa7319d47aed"
	Sep 17 00:17:31 functional-836309 kubelet[5462]: E0917 00:17:31.610990    5462 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068251610657192  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 17 00:17:31 functional-836309 kubelet[5462]: E0917 00:17:31.611024    5462 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068251610657192  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 17 00:17:36 functional-836309 kubelet[5462]: E0917 00:17:36.538145    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="0f84d084-6e2e-4197-b486-4ba402096a6c"
	
	
	==> storage-provisioner [8750ce41941ba15a9b4b2e19cfe5128979331c1400a49209e1f4efb5b1318340] <==
	W0917 00:17:15.077271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:17.080186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:17.084528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:19.088130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:19.092348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:21.095646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:21.102270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:23.105910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:23.110346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:25.113701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:25.119659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:27.123136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:27.128328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:29.131493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:29.137051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:31.140114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:31.146048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:33.150060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:33.154080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:35.158312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:35.162902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:37.165998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:37.170461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:39.173920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:39.178113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fee9c2e341d4f1fd20c4ea1c22db8cd7eca409574ec8835d434658453643976f] <==
	W0917 00:10:17.462196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:19.465941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:19.471590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:21.475172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:21.479508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:23.483478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:23.491192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:25.495638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:25.501797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:27.506026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:27.512276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:29.515329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:29.519407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:31.522663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:31.529122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:33.532130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:33.536263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:35.539874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:35.544694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:37.549064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:37.553478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:39.557571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:39.563110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:41.566878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:41.571434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
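
The component logs above show a control plane recovering from a restart: the first kube-scheduler and kube-controller-manager instances exit while the apiserver on 192.168.49.2:8441 is unreachable, and their replacements sync their caches normally. The repeated kube-proxy warning ("nodePortAddresses is unset") is a configuration hint, not a failure cause. One way to inspect the active setting, assuming the stock kubeadm-managed kube-proxy ConfigMap that minikube creates:

    # Print the nodePortAddresses field from kube-proxy's live configuration
    kubectl --context functional-836309 -n kube-system get configmap kube-proxy \
      -o jsonpath='{.data.config\.conf}' | grep -i nodePortAddresses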
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-836309 -n functional-836309
helpers_test.go:269: (dbg) Run:  kubectl --context functional-836309 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-m76kz mysql-5bb876957f-l9pq7 nginx-svc sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-836309 describe pod busybox-mount hello-node-75c85bcc94-m76kz mysql-5bb876957f-l9pq7 nginx-svc sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-836309 describe pod busybox-mount hello-node-75c85bcc94-m76kz mysql-5bb876957f-l9pq7 nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:11:31 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  cri-o://cb474edf243b1a8e4e93b368e7e6be5f76c0c8b839e74e1c49c1a7bff20a0680
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 17 Sep 2025 00:12:00 +0000
	      Finished:     Wed, 17 Sep 2025 00:12:00 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zvp4d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zvp4d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m8s   default-scheduler  Successfully assigned default/busybox-mount to functional-836309
	  Normal  Pulling    6m9s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m40s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.264s (28.084s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m40s  kubelet            Created container: mount-munger
	  Normal  Started    5m40s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-m76kz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:11:25 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c4fhc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-c4fhc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m15s                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-m76kz to functional-836309
	  Normal   Pulling    69s (x5 over 6m15s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     39s (x5 over 6m15s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     39s (x5 over 6m15s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    14s (x13 over 6m15s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     14s (x13 over 6m15s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-l9pq7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:11:27 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-76bnk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-76bnk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m12s                 default-scheduler  Successfully assigned default/mysql-5bb876957f-l9pq7 to functional-836309
	  Normal   Pulling    110s (x4 over 6m13s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     70s (x4 over 5m43s)   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     70s (x4 over 5m43s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    9s (x10 over 5m42s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     9s (x10 over 5m42s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:12:08 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2v8fx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2v8fx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m31s                default-scheduler  Successfully assigned default/nginx-svc to functional-836309
	  Normal   BackOff    61s (x5 over 4m40s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     61s (x5 over 4m40s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    50s (x4 over 5m32s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     9s (x4 over 4m40s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     9s (x4 over 4m40s)   kubelet            Error: ErrImagePull
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:11:37 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85lfd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-85lfd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-836309
	  Normal   Pulling    75s (x4 over 6m3s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     39s (x4 over 5m10s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     39s (x4 over 5m10s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    4s (x8 over 5m10s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     4s (x8 over 5m10s)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (368.34s)
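
All five non-running pods in the post-mortem above fail for one of two reasons, neither specific to the PersistentVolumeClaim logic under test: Docker Hub's unauthenticated pull rate limit (toomanyrequests, hitting docker.io/nginx, docker.io/nginx:alpine, and docker.io/mysql:5.7), and CRI-O short-name resolution, which rejects "kicbase/echo-server" because /etc/containers/registries.conf defines no unqualified-search registries. A minimal sketch of the short-name workaround, assuming the image is published on Docker Hub under the same name (the :latest tag here is illustrative):

    # Use a fully-qualified reference so CRI-O never consults the short-name alias table
    kubectl --context functional-836309 set image deployment/hello-node \
      echo-server=docker.io/kicbase/echo-server:latest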

                                                
                                    
TestFunctional/parallel/MySQL (603.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-836309 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-l9pq7" [a1c1727d-2e60-4a98-8ae8-aa7319d47aed] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-836309 -n functional-836309
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-09-17 00:21:27.680416511 +0000 UTC m=+1995.147760083
functional_test.go:1804: (dbg) Run:  kubectl --context functional-836309 describe po mysql-5bb876957f-l9pq7 -n default
functional_test.go:1804: (dbg) kubectl --context functional-836309 describe po mysql-5bb876957f-l9pq7 -n default:
Name:             mysql-5bb876957f-l9pq7
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-836309/192.168.49.2
Start Time:       Wed, 17 Sep 2025 00:11:27 +0000
Labels:           app=mysql
                  pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP (mysql)
    Host Port:      0/TCP (mysql)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-76bnk (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-76bnk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-l9pq7 to functional-836309
  Normal   Pulling    3m30s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
  Warning  Failed     2m15s (x5 over 9m30s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     2m15s (x5 over 9m30s)  kubelet            Error: ErrImagePull
  Warning  Failed     70s (x16 over 9m29s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    8s (x21 over 9m29s)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-836309 logs mysql-5bb876957f-l9pq7 -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-836309 logs mysql-5bb876957f-l9pq7 -n default: exit status 1 (68.022635ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-l9pq7" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-836309 logs mysql-5bb876957f-l9pq7 -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
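
This failure shares the root cause of the pull errors above: anonymous pulls from docker.io are throttled, so docker.io/mysql:5.7 never arrives within the 10m0s window. Docker's documented rate-limit probe shows how many anonymous pulls remain from this host (requires curl and jq):

    # Fetch an anonymous token for the rate-limit probe repository, then read the RateLimit headers
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -sI -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

Authenticated pulls get a higher quota, so wiring Docker Hub credentials into the cluster as an imagePullSecret (for example with kubectl create secret docker-registry) would raise the limit for these test pods.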
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-836309
helpers_test.go:243: (dbg) docker inspect functional-836309:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5",
	        "Created": "2025-09-17T00:09:44.133139993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 564972,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:09:44.169133569Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5/hosts",
	        "LogPath": "/var/lib/docker/containers/3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5/3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5-json.log",
	        "Name": "/functional-836309",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-836309:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-836309",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3ec3e877de9bf8536e2c32a388cdb6fa3b2b7f148ceb5c097e8ab397f71a10f5",
	                "LowerDir": "/var/lib/docker/overlay2/de2b96e7bc9a2a6ce5c4debfc0e842c0965361244c0995ec8ded64beb49c8264-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de2b96e7bc9a2a6ce5c4debfc0e842c0965361244c0995ec8ded64beb49c8264/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de2b96e7bc9a2a6ce5c4debfc0e842c0965361244c0995ec8ded64beb49c8264/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de2b96e7bc9a2a6ce5c4debfc0e842c0965361244c0995ec8ded64beb49c8264/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-836309",
	                "Source": "/var/lib/docker/volumes/functional-836309/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-836309",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-836309",
	                "name.minikube.sigs.k8s.io": "functional-836309",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "23448c026a24457ded735e88238de72a95f1b2d956a93efb7f9494b958befb64",
	            "SandboxKey": "/var/run/docker/netns/23448c026a24",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-836309": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:01:e3:2b:98:c6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f11c0adeed5b0a571ce66bcfa96404e5751f9da2bd5366531798e16160202bd2",
	                    "EndpointID": "47b04d28f82bdaef821c6f0a8dc045f3604bb616ac73b4ea262d9bb6aa905794",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-836309",
	                        "3ec3e877de9b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
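The inspect output above can also be queried directly rather than read by eye; for example, the host port mapped to the node's SSH port 22 (33143 here) comes out of a Go template:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' functional-836309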
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-836309 -n functional-836309
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-836309 logs -n 25: (1.552342192s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-836309 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ ssh       │ functional-836309 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh       │ functional-836309 ssh -- ls -la /mount-9p                                                                          │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh       │ functional-836309 ssh sudo umount -f /mount-9p                                                                     │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ mount     │ -p functional-836309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665606697/001:/mount3 --alsologtostderr -v=1 │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ mount     │ -p functional-836309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665606697/001:/mount1 --alsologtostderr -v=1 │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ mount     │ -p functional-836309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665606697/001:/mount2 --alsologtostderr -v=1 │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ ssh       │ functional-836309 ssh findmnt -T /mount1                                                                           │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ ssh       │ functional-836309 ssh findmnt -T /mount1                                                                           │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh       │ functional-836309 ssh findmnt -T /mount2                                                                           │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh       │ functional-836309 ssh findmnt -T /mount3                                                                           │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ mount     │ -p functional-836309 --kill=true                                                                                   │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ ssh       │ functional-836309 ssh echo hello                                                                                   │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh       │ functional-836309 ssh cat /etc/hostname                                                                            │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ tunnel    │ functional-836309 tunnel --alsologtostderr                                                                         │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ tunnel    │ functional-836309 tunnel --alsologtostderr                                                                         │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ tunnel    │ functional-836309 tunnel --alsologtostderr                                                                         │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ addons    │ functional-836309 addons list                                                                                      │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:17 UTC │ 17 Sep 25 00:17 UTC │
	│ addons    │ functional-836309 addons list -o json                                                                              │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:17 UTC │ 17 Sep 25 00:17 UTC │
	│ start     │ -p functional-836309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:17 UTC │                     │
	│ start     │ -p functional-836309 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:17 UTC │                     │
	│ start     │ -p functional-836309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:17 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-836309 --alsologtostderr -v=1                                                     │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:17 UTC │                     │
	│ service   │ functional-836309 service list                                                                                     │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ service   │ functional-836309 service list -o json                                                                             │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │                     │
	└───────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:17:40
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:17:40.936845  583199 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:17:40.936953  583199 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:17:40.936960  583199 out.go:374] Setting ErrFile to fd 2...
	I0917 00:17:40.936966  583199 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:17:40.937339  583199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:17:40.937877  583199 out.go:368] Setting JSON to false
	I0917 00:17:40.938867  583199 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10804,"bootTime":1758057457,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:17:40.938993  583199 start.go:140] virtualization: kvm guest
	I0917 00:17:40.941492  583199 out.go:179] * [functional-836309] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:17:40.944227  583199 notify.go:220] Checking for updates...
	I0917 00:17:40.944335  583199 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:17:40.946765  583199 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:17:40.948295  583199 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:17:40.949696  583199 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 00:17:40.951158  583199 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:17:40.952856  583199 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:17:40.955046  583199 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:17:40.955588  583199 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:17:40.980713  583199 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:17:40.980830  583199 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:17:41.040600  583199 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-17 00:17:41.029871976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:17:41.040710  583199 docker.go:318] overlay module found
	I0917 00:17:41.043008  583199 out.go:179] * Using the docker driver based on the existing profile
	I0917 00:17:41.045273  583199 start.go:304] selected driver: docker
	I0917 00:17:41.045298  583199 start.go:918] validating driver "docker" against &{Name:functional-836309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-836309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:17:41.045421  583199 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:17:41.048155  583199 out.go:203] 
	W0917 00:17:41.049889  583199 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is below the usable minimum of 1800 MB
	I0917 00:17:41.051309  583199 out.go:203] 
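The RSRC_INSUFFICIENT_REQ_MEMORY exit is the expected outcome of this dry run: minikube rejects memory allocations below its usable minimum of 1800 MB. A dry run that passes validation would request at least that much, e.g. (illustrative invocation, not from this run):

	out/minikube-linux-amd64 start -p functional-836309 --dry-run --memory 2048mb --alsologtostderr --driver=docker --container-runtime=crio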
	
	
	==> CRI-O <==
	Sep 17 00:20:26 functional-836309 crio[4225]: time="2025-09-17 00:20:26.538675003Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=63394bbc-1520-4b16-b90c-1b3b4659da0a name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:20:28 functional-836309 crio[4225]: time="2025-09-17 00:20:28.538688768Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=2fdaaeff-2f16-4883-9ac2-091f1ef4cda6 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:20:28 functional-836309 crio[4225]: time="2025-09-17 00:20:28.538926203Z" level=info msg="Image docker.io/mysql:5.7 not found" id=2fdaaeff-2f16-4883-9ac2-091f1ef4cda6 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:20:37 functional-836309 crio[4225]: time="2025-09-17 00:20:37.538237616Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=ee8c8ab3-86f4-49f8-82db-0a4155f203a1 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:20:37 functional-836309 crio[4225]: time="2025-09-17 00:20:37.538652845Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=ee8c8ab3-86f4-49f8-82db-0a4155f203a1 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:20:42 functional-836309 crio[4225]: time="2025-09-17 00:20:42.988764605Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9472bee5-985d-443e-ab84-4cfef26ed6b3 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:20:42 functional-836309 crio[4225]: time="2025-09-17 00:20:42.989509724Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=8a8022b1-2c9e-4e5e-a9e1-796f46f5546b name=/runtime.v1.ImageService/PullImage
	Sep 17 00:20:43 functional-836309 crio[4225]: time="2025-09-17 00:20:43.000770733Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 17 00:20:43 functional-836309 crio[4225]: time="2025-09-17 00:20:43.538221056Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=58409004-db9b-478c-ad77-31ae60eb243d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:20:43 functional-836309 crio[4225]: time="2025-09-17 00:20:43.538555240Z" level=info msg="Image docker.io/mysql:5.7 not found" id=58409004-db9b-478c-ad77-31ae60eb243d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:20:55 functional-836309 crio[4225]: time="2025-09-17 00:20:55.538248828Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=f9be5f40-a44a-40f4-9e64-879a998c66d6 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:20:55 functional-836309 crio[4225]: time="2025-09-17 00:20:55.538281302Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=c126bcc2-5162-4813-861b-18f122f571f8 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:20:55 functional-836309 crio[4225]: time="2025-09-17 00:20:55.538596071Z" level=info msg="Image docker.io/mysql:5.7 not found" id=c126bcc2-5162-4813-861b-18f122f571f8 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:20:55 functional-836309 crio[4225]: time="2025-09-17 00:20:55.538618844Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=f9be5f40-a44a-40f4-9e64-879a998c66d6 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:21:07 functional-836309 crio[4225]: time="2025-09-17 00:21:07.538585839Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=3578d4ee-1c46-4e11-b3b8-280340156860 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:21:07 functional-836309 crio[4225]: time="2025-09-17 00:21:07.538900752Z" level=info msg="Image docker.io/mysql:5.7 not found" id=3578d4ee-1c46-4e11-b3b8-280340156860 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:21:09 functional-836309 crio[4225]: time="2025-09-17 00:21:09.538184253Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b4871fec-64d9-4313-b40c-a365897d3c4d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:21:09 functional-836309 crio[4225]: time="2025-09-17 00:21:09.538538646Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=b4871fec-64d9-4313-b40c-a365897d3c4d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:21:13 functional-836309 crio[4225]: time="2025-09-17 00:21:13.101722275Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b7bb07f5-2e2a-4f71-a40a-0d8927c63c0a name=/runtime.v1.ImageService/PullImage
	Sep 17 00:21:13 functional-836309 crio[4225]: time="2025-09-17 00:21:13.102558653Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=81fa3a85-3ed4-4f28-aed7-74bc8f5da9d5 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:21:13 functional-836309 crio[4225]: time="2025-09-17 00:21:13.110680817Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 17 00:21:19 functional-836309 crio[4225]: time="2025-09-17 00:21:19.538750885Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=830509ed-3f52-4f9d-a18a-cf2e1cffedc4 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:21:19 functional-836309 crio[4225]: time="2025-09-17 00:21:19.539028340Z" level=info msg="Image docker.io/mysql:5.7 not found" id=830509ed-3f52-4f9d-a18a-cf2e1cffedc4 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:21:28 functional-836309 crio[4225]: time="2025-09-17 00:21:28.538530569Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=7e8edbf8-c741-4ad7-b804-b2509340a3b8 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:21:28 functional-836309 crio[4225]: time="2025-09-17 00:21:28.538833678Z" level=info msg="Image docker.io/nginx:alpine not found" id=7e8edbf8-c741-4ad7-b804-b2509340a3b8 name=/runtime.v1.ImageService/ImageStatus
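The repeated "Image ... not found" responses above are kubelet polling ImageStatus while the pulls keep failing against the rate limit. To confirm from the node side which images actually landed in CRI-O's store, one option (a sketch, assuming crictl is present on the node as in the kicbase image) is:

	out/minikube-linux-amd64 -p functional-836309 ssh "sudo crictl images | grep -E 'mysql|nginx|dashboard'"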
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb474edf243b1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Exited              mount-munger              0                   d689b11bc9243       busybox-mount
	9f2aad7cc830a       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      10 minutes ago      Running             kube-apiserver            0                   cb31a6d151f18       kube-apiserver-functional-836309
	8fc6aae6af439       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      10 minutes ago      Running             kube-controller-manager   2                   073e9000e2cbd       kube-controller-manager-functional-836309
	a14ceabc188eb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      1                   bd997b17bb8d3       etcd-functional-836309
	888d62ee0b634       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      10 minutes ago      Exited              kube-controller-manager   1                   073e9000e2cbd       kube-controller-manager-functional-836309
	c06f60831d1a2       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      10 minutes ago      Running             kube-proxy                1                   04529c3273474       kube-proxy-cbvjf
	64858777ddc03       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      10 minutes ago      Running             kindnet-cni               1                   e619e5a0562ff       kindnet-h2rjf
	8414e6a217a0a       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      10 minutes ago      Running             kube-scheduler            1                   c5ca55e367f9f       kube-scheduler-functional-836309
	8750ce41941ba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       1                   9bd06274bf9f1       storage-provisioner
	9d874bdc79320       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 minutes ago      Running             coredns                   1                   4111a7c1816a0       coredns-66bc5c9577-zvmqf
	43960daf0ceb5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   0                   4111a7c1816a0       coredns-66bc5c9577-zvmqf
	fee9c2e341d4f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Exited              storage-provisioner       0                   9bd06274bf9f1       storage-provisioner
	94e0331fcf046       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      11 minutes ago      Exited              kindnet-cni               0                   e619e5a0562ff       kindnet-h2rjf
	2590bb5313e64       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      11 minutes ago      Exited              kube-proxy                0                   04529c3273474       kube-proxy-cbvjf
	fd4423f996e17       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      11 minutes ago      Exited              kube-scheduler            0                   c5ca55e367f9f       kube-scheduler-functional-836309
	66e1997c75a09       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago      Exited              etcd                      0                   bd997b17bb8d3       etcd-functional-836309
	
	
	==> coredns [43960daf0ceb508755bb95ca37b4c30a5d31d7bdbf6bef6d16e3dbefa1056330] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58276 - 22452 "HINFO IN 7807615287491316741.4205491171577213210. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036670075s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9d874bdc7932076f658b9567185beccffdb2e85d489d293dfe85e3e619013c1f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34900 - 1175 "HINFO IN 6559932629016620651.4444246566734803126. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.054012876s
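Both CoreDNS instances came up cleanly; the HINFO NXDOMAIN line is CoreDNS's normal startup self-probe, not an error. If cluster DNS were in doubt, the live Corefile can be pulled from the standard kube-system ConfigMap (a sketch):

	kubectl --context functional-836309 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'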
	
	
	==> describe nodes <==
	Name:               functional-836309
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-836309
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=functional-836309
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T00_09_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:09:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-836309
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:21:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:21:15 +0000   Wed, 17 Sep 2025 00:09:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:21:15 +0000   Wed, 17 Sep 2025 00:09:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:21:15 +0000   Wed, 17 Sep 2025 00:09:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:21:15 +0000   Wed, 17 Sep 2025 00:10:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-836309
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 67f7de0bcecd43499ea9b16c8c00a864
	  System UUID:                e097105d-a213-4ebf-95fe-cce4cad422c0
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-m76kz                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-54xkq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  default                     mysql-5bb876957f-l9pq7                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m21s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 coredns-66bc5c9577-zvmqf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-836309                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-h2rjf                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-836309              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-836309     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-cbvjf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-836309              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-htbkl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lm4gk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-836309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-836309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-836309 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-836309 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-836309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-836309 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-836309 event: Registered Node functional-836309 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-836309 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-836309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-836309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-836309 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-836309 event: Registered Node functional-836309 in Controller
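For reference, the Allocated resources percentages above are computed against the node's Allocatable figures: 1450m of requested CPU out of 8 cores (8000m) is 1450/8000 ≈ 18%, and 732Mi of requested memory out of 32863460Ki (≈ 32093Mi) is 732/32093 ≈ 2%.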
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
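The martian-destination messages come from pod traffic aimed at Docker's embedded DNS (127.0.0.11) crossing veth devices; they are noisy but harmless in this setup. If they needed chasing, the relevant kernel knobs could be read on the node (illustrative):

	out/minikube-linux-amd64 -p functional-836309 ssh "sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter"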
	
	
	==> etcd [66e1997c75a09719465fdda73ab2f14bd72552ff33212c4d720f74944117320d] <==
	{"level":"warn","ts":"2025-09-17T00:09:55.212910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.220259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.227159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.234529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.243853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.251054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:55.257902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51972","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:10:42.783237Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-17T00:10:42.783351Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-836309","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-17T00:10:42.783494Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T00:10:49.785151Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T00:10:49.785250Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:10:49.785304Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-09-17T00:10:49.785881Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:10:49.785904Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:10:49.785429Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:10:49.785915Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-17T00:10:49.785929Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:10:49.785947Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:10:49.785969Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-17T00:10:49.785982Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-17T00:10:49.788632Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-17T00:10:49.788702Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:10:49.788727Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-17T00:10:49.788733Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-836309","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [a14ceabc188ebbf10535dda7c1f798592d2e79e03743ad28e2bd444ce75333ba] <==
	{"level":"warn","ts":"2025-09-17T00:11:02.777282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.783883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.791702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.799199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.806444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.812694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.819824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.828034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.834969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.841980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.849538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.863753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.870164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.878044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.884349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.890622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.898140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.905536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.912507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.926102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.939007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:11:02.982536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50360","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:21:02.483178Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1020}
	{"level":"info","ts":"2025-09-17T00:21:02.502430Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1020,"took":"18.848289ms","hash":1664117828,"current-db-size-bytes":3403776,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1634304,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-09-17T00:21:02.502483Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1664117828,"revision":1020,"compact-revision":-1}
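
The compaction entries above show etcd at revision 1020 with a 3.4 MB database, 1.6 MB of it in use. The same figures can be read directly with etcdctl against the advertised client URL; a sketch, assuming the cert layout under /var/lib/minikube/certs/etcd (the cert directory shown elsewhere in this log):

  ETCDCTL_API=3 etcdctl --endpoints=https://192.168.49.2:2379 \
    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
    --cert=/var/lib/minikube/certs/etcd/server.crt \
    --key=/var/lib/minikube/certs/etcd/server.key \
    endpoint status -w table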
	
	
	==> kernel <==
	 00:21:29 up  3:03,  0 users,  load average: 0.12, 0.39, 9.08
	Linux functional-836309 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [64858777ddc0357994b52a6fd8bf79dba5ac39143453505e0f08e2a242aecae8] <==
	I0917 00:19:23.717297       1 main.go:301] handling current node
	I0917 00:19:33.723554       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:19:33.723591       1 main.go:301] handling current node
	I0917 00:19:43.716760       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:19:43.716819       1 main.go:301] handling current node
	I0917 00:19:53.725103       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:19:53.725140       1 main.go:301] handling current node
	I0917 00:20:03.717279       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:20:03.717315       1 main.go:301] handling current node
	I0917 00:20:13.716360       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:20:13.716417       1 main.go:301] handling current node
	I0917 00:20:23.721847       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:20:23.721888       1 main.go:301] handling current node
	I0917 00:20:33.716242       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:20:33.716298       1 main.go:301] handling current node
	I0917 00:20:43.716421       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:20:43.716476       1 main.go:301] handling current node
	I0917 00:20:53.725170       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:20:53.725212       1 main.go:301] handling current node
	I0917 00:21:03.717539       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:21:03.717585       1 main.go:301] handling current node
	I0917 00:21:13.716959       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:21:13.717006       1 main.go:301] handling current node
	I0917 00:21:23.717031       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:21:23.717085       1 main.go:301] handling current node
	
	
	==> kindnet [94e0331fcf046a39dfa4b150cab0807b41735b3149fccd0d7298c096121f3177] <==
	I0917 00:10:04.407562       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0917 00:10:04.407829       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0917 00:10:04.407974       1 main.go:148] setting mtu 1500 for CNI 
	I0917 00:10:04.407992       1 main.go:178] kindnetd IP family: "ipv4"
	I0917 00:10:04.408041       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-17T00:10:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0917 00:10:04.608241       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0917 00:10:04.608325       1 controller.go:381] "Waiting for informer caches to sync"
	I0917 00:10:04.608338       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0917 00:10:04.608850       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0917 00:10:05.008798       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0917 00:10:05.008823       1 metrics.go:72] Registering metrics
	I0917 00:10:05.008870       1 controller.go:711] "Syncing nftables rules"
	I0917 00:10:14.613627       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:14.613697       1 main.go:301] handling current node
	I0917 00:10:24.615570       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:24.615608       1 main.go:301] handling current node
	I0917 00:10:34.612524       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:10:34.612559       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9f2aad7cc830a3ec57ba1b3d2cd335c4f402ff995fba44cd8dd9944ea36855bb] <==
	I0917 00:11:07.148642       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0917 00:11:20.918862       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.144.51"}
	I0917 00:11:25.122372       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.76.119"}
	I0917 00:11:27.305503       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.106.76.206"}
	I0917 00:12:08.543295       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.9.127"}
	I0917 00:12:16.483422       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:12:26.931027       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:13:26.407498       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:13:39.931798       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:14:38.574091       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:15:09.009462       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:15:47.255026       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:16:24.666189       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:17:12.893722       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:17:41.988170       1 controller.go:667] quota admission added evaluator for: namespaces
	I0917 00:17:42.111550       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.15.120"}
	I0917 00:17:42.123959       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.205.187"}
	I0917 00:17:49.333363       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:17:56.710543       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.163.232"}
	I0917 00:18:40.466273       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:19:10.646833       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:20:06.755790       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:20:15.654078       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:21:03.383145       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:21:15.681088       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [888d62ee0b634c673d1878ce150c6f0034e298592a41de5b4a133d003db1a139] <==
	I0917 00:10:43.989698       1 serving.go:386] Generated self-signed cert in-memory
	I0917 00:10:44.308654       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0917 00:10:44.308686       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:10:44.310251       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 00:10:44.310301       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0917 00:10:44.310653       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0917 00:10:44.310800       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0917 00:10:56.321004       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
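
This instance never finished starting: it timed out waiting on the apiserver's /healthz at 192.168.49.2:8441 (the non-default port used by the functional tests). A manual probe of the same endpoint, illustrative only:

  curl -k https://192.168.49.2:8441/healthz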
	
	
	==> kube-controller-manager [8fc6aae6af439080e3411b9cb8143eddc1da6c5a6e3211c2a191a3dbfa865ca9] <==
	I0917 00:11:06.793750       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0917 00:11:06.793797       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0917 00:11:06.795031       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0917 00:11:06.795086       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0917 00:11:06.795122       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0917 00:11:06.795131       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0917 00:11:06.795137       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0917 00:11:06.795175       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0917 00:11:06.795208       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0917 00:11:06.797152       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0917 00:11:06.798500       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0917 00:11:06.800827       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:11:06.800851       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0917 00:11:06.800859       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 00:11:06.800834       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:11:06.803222       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:11:06.805177       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0917 00:11:06.807633       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0917 00:11:06.816406       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0917 00:17:42.036751       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:17:42.041055       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:17:42.045573       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:17:42.045609       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:17:42.049857       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:17:42.055310       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [2590bb5313e648a1d5258fd84180691999e1fa74ac7e4a9bad97c4eaec4d2485] <==
	I0917 00:10:04.193311       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:10:04.263769       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:10:04.364709       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:10:04.364767       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:10:04.364855       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:10:04.385096       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:10:04.385159       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:10:04.390876       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:10:04.391486       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:10:04.391511       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:10:04.393121       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:10:04.393158       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:10:04.393167       1 config.go:200] "Starting service config controller"
	I0917 00:10:04.393187       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:10:04.393201       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:10:04.393189       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:10:04.393246       1 config.go:309] "Starting node config controller"
	I0917 00:10:04.393260       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:10:04.493428       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:10:04.493462       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:10:04.493428       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:10:04.493439       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c06f60831d1a27beead1133ee09bd56597eea7ed1a44bd377eb0a2445447cee8] <==
	I0917 00:10:43.389590       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:10:43.460712       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:10:43.561820       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:10:43.561866       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:10:43.561957       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:10:43.585276       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:10:43.585350       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:10:43.590785       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:10:43.591164       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:10:43.591200       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:10:43.593011       1 config.go:200] "Starting service config controller"
	I0917 00:10:43.593356       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:10:43.593113       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:10:43.593126       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:10:43.593435       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:10:43.593437       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:10:43.593165       1 config.go:309] "Starting node config controller"
	I0917 00:10:43.593494       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:10:43.593503       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:10:43.693526       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:10:43.693578       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:10:43.693636       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
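
Both kube-proxy instances log the same warning: nodePortAddresses is unset, so NodePort connections are accepted on all local IPs. A minimal KubeProxyConfiguration fragment applying the suggested fix, assuming the config-file field accepts the same special "primary" value as the flag:

  apiVersion: kubeproxy.config.k8s.io/v1alpha1
  kind: KubeProxyConfiguration
  nodePortAddresses:
  - primary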
	
	
	==> kube-scheduler [8414e6a217a0a65711aa4a8781ace6ed51c30407bf0166b9c4024dad4b506e9c] <==
	I0917 00:10:44.134044       1 serving.go:386] Generated self-signed cert in-memory
	I0917 00:10:51.491622       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:10:51.491651       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:10:51.496210       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:51.496222       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0917 00:10:51.496254       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:51.496251       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:10:51.496272       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:10:51.496256       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0917 00:10:51.496635       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:10:51.496706       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:10:51.596824       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:10:51.597020       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0917 00:10:51.597094       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0917 00:11:03.387571       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0917 00:11:03.387692       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0917 00:11:03.387722       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:11:03.387745       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0917 00:11:03.387764       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0917 00:11:03.387800       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	
	
	==> kube-scheduler [fd4423f996e172ec520acd90ab88ecb92a9bfa721cc812a9d73b36f24a393306] <==
	E0917 00:09:56.365834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0917 00:09:56.365882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0917 00:09:56.366012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0917 00:09:56.366067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0917 00:09:56.366104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0917 00:09:56.366185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0917 00:09:56.366277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0917 00:09:56.366176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0917 00:09:56.366533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0917 00:09:56.366612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0917 00:09:56.366642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0917 00:09:56.366681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0917 00:09:56.366732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:09:56.366735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0917 00:09:56.366804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0917 00:09:56.366825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0917 00:09:56.366896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0917 00:09:56.366939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I0917 00:09:57.962884       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:42.641974       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 00:10:42.642087       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:42.642285       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0917 00:10:42.642311       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0917 00:10:42.642328       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0917 00:10:42.642359       1 run.go:72] "command failed" err="finished without leader elect"
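
The "Failed to watch ... forbidden" bursts in both scheduler instances occur while RBAC is still syncing after an apiserver restart; in the instance above they appear at 00:09:56 and caches are synced by 00:09:57. An illustrative after-the-fact check of the scheduler's permissions:

  kubectl --context functional-836309 auth can-i watch nodes --as=system:kube-scheduler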
	
	
	==> kubelet <==
	Sep 17 00:20:51 functional-836309 kubelet[5462]: E0917 00:20:51.642930    5462 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068451642620296  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 17 00:20:55 functional-836309 kubelet[5462]: E0917 00:20:55.538933    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-l9pq7" podUID="a1c1727d-2e60-4a98-8ae8-aa7319d47aed"
	Sep 17 00:20:55 functional-836309 kubelet[5462]: E0917 00:20:55.538933    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lm4gk" podUID="3f7e653f-cd38-4dd9-8d08-5632496af8f8"
	Sep 17 00:20:58 functional-836309 kubelet[5462]: E0917 00:20:58.537708    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-54xkq" podUID="2d5c821a-47c0-4488-b33d-e43b5a07a2f0"
	Sep 17 00:21:01 functional-836309 kubelet[5462]: E0917 00:21:01.644200    5462 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068461643900483  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 17 00:21:01 functional-836309 kubelet[5462]: E0917 00:21:01.644233    5462 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068461643900483  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 17 00:21:04 functional-836309 kubelet[5462]: E0917 00:21:04.538033    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="0f84d084-6e2e-4197-b486-4ba402096a6c"
	Sep 17 00:21:07 functional-836309 kubelet[5462]: E0917 00:21:07.539218    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-l9pq7" podUID="a1c1727d-2e60-4a98-8ae8-aa7319d47aed"
	Sep 17 00:21:11 functional-836309 kubelet[5462]: E0917 00:21:11.646114    5462 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068471645880795  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 17 00:21:11 functional-836309 kubelet[5462]: E0917 00:21:11.646152    5462 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068471645880795  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 17 00:21:13 functional-836309 kubelet[5462]: E0917 00:21:13.101225    5462 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 17 00:21:13 functional-836309 kubelet[5462]: E0917 00:21:13.101287    5462 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 17 00:21:13 functional-836309 kubelet[5462]: E0917 00:21:13.101555    5462 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx-svc_default(54252b1b-51bf-4359-848b-6b08a8f68dcd): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 00:21:13 functional-836309 kubelet[5462]: E0917 00:21:13.101615    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="54252b1b-51bf-4359-848b-6b08a8f68dcd"
	Sep 17 00:21:13 functional-836309 kubelet[5462]: E0917 00:21:13.102102    5462 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" image="kicbase/echo-server:latest"
	Sep 17 00:21:13 functional-836309 kubelet[5462]: E0917 00:21:13.102147    5462 kuberuntime_image.go:43] "Failed to pull image" err="short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" image="kicbase/echo-server:latest"
	Sep 17 00:21:13 functional-836309 kubelet[5462]: E0917 00:21:13.102346    5462 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-m76kz_default(de55227f-8aa8-49c2-b1dc-b0517b716b2d): ErrImagePull: short-name \"kicbase/echo-server:latest\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\"" logger="UnhandledError"
	Sep 17 00:21:13 functional-836309 kubelet[5462]: E0917 00:21:13.102906    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-m76kz" podUID="de55227f-8aa8-49c2-b1dc-b0517b716b2d"
	Sep 17 00:21:15 functional-836309 kubelet[5462]: E0917 00:21:15.538201    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="0f84d084-6e2e-4197-b486-4ba402096a6c"
	Sep 17 00:21:19 functional-836309 kubelet[5462]: E0917 00:21:19.539409    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-l9pq7" podUID="a1c1727d-2e60-4a98-8ae8-aa7319d47aed"
	Sep 17 00:21:21 functional-836309 kubelet[5462]: E0917 00:21:21.648225    5462 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068481647939930  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 17 00:21:21 functional-836309 kubelet[5462]: E0917 00:21:21.648266    5462 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068481647939930  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175987}  inodes_used:{value:88}}"
	Sep 17 00:21:26 functional-836309 kubelet[5462]: E0917 00:21:26.538100    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-m76kz" podUID="de55227f-8aa8-49c2-b1dc-b0517b716b2d"
	Sep 17 00:21:28 functional-836309 kubelet[5462]: E0917 00:21:28.539221    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="54252b1b-51bf-4359-848b-6b08a8f68dcd"
	Sep 17 00:21:29 functional-836309 kubelet[5462]: E0917 00:21:29.538148    5462 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="0f84d084-6e2e-4197-b486-4ba402096a6c"
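
Two pull failures dominate this kubelet log: Docker Hub's unauthenticated rate limit (toomanyrequests) for the mysql, nginx, and dashboard images, and short-name resolution failing for kicbase/echo-server:latest because /etc/containers/registries.conf on the node defines no unqualified-search registries. A minimal registries.conf sketch that would let the short name resolve (illustrative; not the runner's actual config):

  unqualified-search-registries = ["docker.io"]

For the rate limit, authenticated pulls are the usual workaround, e.g. an image pull secret (names and credentials here are placeholders):

  kubectl create secret docker-registry regcred \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username=<user> --docker-password=<token>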
	
	
	==> storage-provisioner [8750ce41941ba15a9b4b2e19cfe5128979331c1400a49209e1f4efb5b1318340] <==
	W0917 00:21:04.040385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:06.043808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:06.048171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:08.051961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:08.057944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:10.061580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:10.067467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:12.070852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:12.075580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:14.079030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:14.083052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:16.086759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:16.091618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:18.094970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:18.098915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:20.102145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:20.106293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:22.109794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:22.116161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:24.119753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:24.124732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:26.128088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:26.133716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:28.137214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:21:28.142503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
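
The Endpoints deprecation warning repeats on a roughly two-second cycle, apparently from the provisioner's leader-election refresh against a v1 Endpoints object. The replacement resource the warning points to can be listed directly; illustrative only:

  kubectl --context functional-836309 get endpointslices.discovery.k8s.io -A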
	
	
	==> storage-provisioner [fee9c2e341d4f1fd20c4ea1c22db8cd7eca409574ec8835d434658453643976f] <==
	W0917 00:10:17.462196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:19.465941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:19.471590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:21.475172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:21.479508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:23.483478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:23.491192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:25.495638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:25.501797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:27.506026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:27.512276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:29.515329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:29.519407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:31.522663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:31.529122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:33.532130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:33.536263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:35.539874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:35.544694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:37.549064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:37.553478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:39.557571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:39.563110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:41.566878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:41.571434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-836309 -n functional-836309
helpers_test.go:269: (dbg) Run:  kubectl --context functional-836309 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-m76kz hello-node-connect-7d85dfc575-54xkq mysql-5bb876957f-l9pq7 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-htbkl kubernetes-dashboard-855c9754f9-lm4gk
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-836309 describe pod busybox-mount hello-node-75c85bcc94-m76kz hello-node-connect-7d85dfc575-54xkq mysql-5bb876957f-l9pq7 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-htbkl kubernetes-dashboard-855c9754f9-lm4gk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-836309 describe pod busybox-mount hello-node-75c85bcc94-m76kz hello-node-connect-7d85dfc575-54xkq mysql-5bb876957f-l9pq7 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-htbkl kubernetes-dashboard-855c9754f9-lm4gk: exit status 1 (123.194607ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:11:31 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  cri-o://cb474edf243b1a8e4e93b368e7e6be5f76c0c8b839e74e1c49c1a7bff20a0680
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 17 Sep 2025 00:12:00 +0000
	      Finished:     Wed, 17 Sep 2025 00:12:00 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zvp4d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zvp4d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m58s  default-scheduler  Successfully assigned default/busybox-mount to functional-836309
	  Normal  Pulling    9m59s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m30s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.264s (28.084s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m30s  kubelet            Created container: mount-munger
	  Normal  Started    9m30s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-m76kz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:11:25 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c4fhc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-c4fhc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-m76kz to functional-836309
	  Normal   Pulling    4m59s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     4m29s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     4m29s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    4s (x24 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4s (x24 over 10m)    kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-54xkq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:17:56 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9ldx8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9ldx8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m33s                default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-54xkq to functional-836309
	  Warning  Failed     48s (x2 over 2m48s)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     48s (x2 over 2m48s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    32s (x2 over 2m48s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     32s (x2 over 2m48s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    18s (x3 over 3m34s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-l9pq7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:11:27 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-76bnk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-76bnk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-l9pq7 to functional-836309
	  Normal   Pulling    3m33s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m18s (x5 over 9m33s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m18s (x5 over 9m33s)  kubelet            Error: ErrImagePull
	  Warning  Failed     73s (x16 over 9m32s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    11s (x21 over 9m32s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:12:08 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2v8fx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2v8fx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m21s                  default-scheduler  Successfully assigned default/nginx-svc to functional-836309
	  Normal   Pulling    2m30s (x5 over 9m22s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     17s (x5 over 8m30s)    kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     17s (x5 over 8m30s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    2s (x12 over 8m30s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2s (x12 over 8m30s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-836309/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:11:37 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85lfd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-85lfd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  9m52s                 default-scheduler  Successfully assigned default/sp-pod to functional-836309
	  Normal   Pulling    3m5s (x5 over 9m53s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     108s (x5 over 9m)     kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     108s (x5 over 9m)     kubelet            Error: ErrImagePull
	  Warning  Failed     50s (x16 over 9m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    1s (x20 over 9m)      kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-htbkl" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-lm4gk" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-836309 describe pod busybox-mount hello-node-75c85bcc94-m76kz hello-node-connect-7d85dfc575-54xkq mysql-5bb876957f-l9pq7 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-htbkl kubernetes-dashboard-855c9754f9-lm4gk: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (603.11s)
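Note: the decisive events above are Docker Hub's unauthenticated pull rate limit (toomanyrequests). One possible mitigation, sketched here as a suggestion rather than anything this run did, is to pull the image with the host's Docker credentials and side-load it into the profile so the in-cluster cri-o runtime never contacts Docker Hub:

	# pull on the host, where authenticated/cached pulls are available
	docker pull docker.io/mysql:5.7
	# copy the image into the functional-836309 node's container runtime
	minikube -p functional-836309 image load docker.io/mysql:5.7
	# confirm the image is now visible to the cluster runtime
	minikube -p functional-836309 image ls | grep mysql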

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-836309 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-836309 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-m76kz" [de55227f-8aa8-49c2-b1dc-b0517b716b2d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-836309 -n functional-836309
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-17 00:21:25.454242731 +0000 UTC m=+1992.921586339
functional_test.go:1460: (dbg) Run:  kubectl --context functional-836309 describe po hello-node-75c85bcc94-m76kz -n default
functional_test.go:1460: (dbg) kubectl --context functional-836309 describe po hello-node-75c85bcc94-m76kz -n default:
Name:             hello-node-75c85bcc94-m76kz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-836309/192.168.49.2
Start Time:       Wed, 17 Sep 2025 00:11:25 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c4fhc (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-c4fhc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-m76kz to functional-836309
  Normal   Pulling    4m54s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     4m24s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     4m24s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     3m21s (x16 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    2m22s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-836309 logs hello-node-75c85bcc94-m76kz -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-836309 logs hello-node-75c85bcc94-m76kz -n default: exit status 1 (68.557185ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-m76kz" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-836309 logs hello-node-75c85bcc94-m76kz -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.65s)
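Note: unlike the Docker Hub rate-limit failures, this one is short-name resolution. The deployment was created with the bare name "kicbase/echo-server", and cri-o's registries.conf on the node defines no unqualified-search registries, so the name cannot be expanded to any registry host. Two hedged sketches of a fix, assuming docker.io is the intended registry: fully qualify the image, or teach the node's registries.conf how to expand the short name.

	# variant 1: fully qualify the image when creating the deployment
	kubectl --context functional-836309 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server:latest

	# variant 2: inside the node, in /etc/containers/registries.conf,
	# let short names fall back to Docker Hub
	unqualified-search-registries = ["docker.io"]
	# or pin just this short name via an alias
	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"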

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-836309 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [54252b1b-51bf-4359-848b-6b08a8f68dcd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0917 00:12:58.301279  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:15:14.436303  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:15:42.143609  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-836309 -n functional-836309
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-09-17 00:16:08.872599292 +0000 UTC m=+1676.339942873
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-836309 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-836309 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-836309/192.168.49.2
Start Time:       Wed, 17 Sep 2025 00:12:08 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:  10.244.0.8
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2v8fx (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-2v8fx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  4m                  default-scheduler  Successfully assigned default/nginx-svc to functional-836309
  Normal   BackOff    83s (x2 over 3m8s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     83s (x2 over 3m8s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    68s (x3 over 4m)    kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     8s (x3 over 3m8s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     8s (x3 over 3m8s)   kubelet            Error: ErrImagePull
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-836309 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-836309 logs nginx-svc -n default: exit status 1 (73.374238ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-836309 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.68s)
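Note: the helper polls for pods labelled run=nginx-svc for 4m0s. An equivalent manual check, shown only as a sketch using this run's context and timeout, blocks until the pod reports Ready and exits non-zero on timeout, which surfaces the image-pull stall immediately:

	kubectl --context functional-836309 wait pod/nginx-svc \
	  --for=condition=Ready --timeout=4m -n default
	# while it hangs, the pull status is visible in the pod's events:
	kubectl --context functional-836309 describe pod nginx-svc -n default | tail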

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0917 00:16:09.015874  521273 retry.go:31] will retry after 1.706774896s: Temporary Error: Get "http:": http: no Host in request URL
I0917 00:16:10.723117  521273 retry.go:31] will retry after 5.453501864s: Temporary Error: Get "http:": http: no Host in request URL
I0917 00:16:16.177492  521273 retry.go:31] will retry after 9.78055404s: Temporary Error: Get "http:": http: no Host in request URL
I0917 00:16:25.958368  521273 retry.go:31] will retry after 8.944571017s: Temporary Error: Get "http:": http: no Host in request URL
I0917 00:16:34.903759  521273 retry.go:31] will retry after 19.475165104s: Temporary Error: Get "http:": http: no Host in request URL
I0917 00:16:54.379466  521273 retry.go:31] will retry after 13.237529346s: Temporary Error: Get "http:": http: no Host in request URL
I0917 00:17:07.617554  521273 retry.go:31] will retry after 48.782998558s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-836309 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx-svc   LoadBalancer   10.97.9.127   10.97.9.127   80:32127/TCP   5m48s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.45s)
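Note: the service did receive the EXTERNAL-IP 10.97.9.127 shown above; the empty "http://" URL and empty body follow from the backing nginx pod never becoming Ready. A manual reproduction, sketched under the assumption that "minikube -p functional-836309 tunnel" is running in another terminal, reads the LoadBalancer ingress IP straight from the service and curls it:

	IP=$(kubectl --context functional-836309 get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -s "http://${IP}/" | grep -i 'Welcome to nginx'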

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-836309 service --namespace=default --https --url hello-node: exit status 115 (553.369374ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31966
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_8.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-836309 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)
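Note: SVC_UNREACHABLE means the service exists but has no running pod behind it; minikube checks this before printing a URL. A quick confirmation, sketched with the namespace and label used by this test, is that the service's endpoints stay empty while the deployment is stuck in ImagePullBackOff:

	kubectl --context functional-836309 get endpoints hello-node -n default
	# ENDPOINTS shows <none> until a backing pod is Ready
	kubectl --context functional-836309 get pods -l app=hello-node -n default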

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-836309 service hello-node --url --format={{.IP}}: exit status 115 (582.38584ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-836309 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-836309 service hello-node --url: exit status 115 (568.715157ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31966
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-836309 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31966
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.57s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (30.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 node add --alsologtostderr -v 5
E0917 00:30:14.436293  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 node add --alsologtostderr -v 5: exit status 80 (28.813618334s)

                                                
                                                
-- stdout --
	* Adding node m04 to cluster ha-671025 as [worker]
	* Starting "ha-671025-m04" worker node in "ha-671025" cluster
	* Pulling base image v0.0.48 ...
	* Stopping node "ha-671025-m04"  ...
	* Deleting "ha-671025-m04" in docker ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:30:05.123199  601803 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:30:05.124194  601803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:30:05.124209  601803 out.go:374] Setting ErrFile to fd 2...
	I0917 00:30:05.124215  601803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:30:05.124478  601803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:30:05.124817  601803 mustload.go:65] Loading cluster: ha-671025
	I0917 00:30:05.125841  601803 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:30:05.126866  601803 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:30:05.145353  601803 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:30:05.145689  601803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:30:05.202594  601803 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:30:05.191613086 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:30:05.202939  601803 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:30:05.222456  601803 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:30:05.223072  601803 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:30:05.241930  601803 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:30:05.242224  601803 api_server.go:166] Checking apiserver status ...
	I0917 00:30:05.242283  601803 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:30:05.242401  601803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:30:05.261078  601803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:30:05.363011  601803 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	W0917 00:30:05.373448  601803 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:30:05.373522  601803 ssh_runner.go:195] Run: ls
	I0917 00:30:05.377834  601803 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:30:05.382218  601803 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:30:05.384080  601803 out.go:179] * Adding node m04 to cluster ha-671025 as [worker]
	I0917 00:30:05.385462  601803 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:30:05.385627  601803 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:30:05.387268  601803 out.go:179] * Starting "ha-671025-m04" worker node in "ha-671025" cluster
	I0917 00:30:05.388349  601803 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:30:05.389552  601803 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:30:05.390547  601803 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:30:05.390586  601803 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 00:30:05.390596  601803 cache.go:58] Caching tarball of preloaded images
	I0917 00:30:05.390626  601803 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:30:05.390686  601803 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:30:05.390697  601803 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:30:05.390798  601803 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:30:05.412431  601803 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:30:05.412457  601803 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:30:05.412480  601803 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:30:05.412515  601803 start.go:360] acquireMachinesLock for ha-671025-m04: {Name:mka8d143727db583191b041d9fdffdc34290d3fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:30:05.412640  601803 start.go:364] duration metric: took 97.941µs to acquireMachinesLock for "ha-671025-m04"
	I0917 00:30:05.412673  601803 start.go:93] Provisioning new machine with config: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0917 00:30:05.412830  601803 start.go:125] createHost starting for "m04" (driver="docker")
	I0917 00:30:05.414907  601803 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:30:05.415020  601803 start.go:159] libmachine.API.Create for "ha-671025" (driver="docker")
	I0917 00:30:05.415046  601803 client.go:168] LocalClient.Create starting
	I0917 00:30:05.415132  601803 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 00:30:05.415165  601803 main.go:141] libmachine: Decoding PEM data...
	I0917 00:30:05.415179  601803 main.go:141] libmachine: Parsing certificate...
	I0917 00:30:05.415254  601803 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 00:30:05.415283  601803 main.go:141] libmachine: Decoding PEM data...
	I0917 00:30:05.415296  601803 main.go:141] libmachine: Parsing certificate...
	I0917 00:30:05.415534  601803 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:30:05.433153  601803 network_create.go:77] Found existing network {name:ha-671025 subnet:0xc0008b4c90 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0917 00:30:05.433197  601803 kic.go:121] calculated static IP "192.168.49.5" for the "ha-671025-m04" container
	I0917 00:30:05.433271  601803 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:30:05.451285  601803 cli_runner.go:164] Run: docker volume create ha-671025-m04 --label name.minikube.sigs.k8s.io=ha-671025-m04 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:30:05.471210  601803 oci.go:103] Successfully created a docker volume ha-671025-m04
	I0917 00:30:05.471292  601803 cli_runner.go:164] Run: docker run --rm --name ha-671025-m04-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025-m04 --entrypoint /usr/bin/test -v ha-671025-m04:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:30:05.876378  601803 oci.go:107] Successfully prepared a docker volume ha-671025-m04
	I0917 00:30:05.876472  601803 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:30:05.876500  601803 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:30:05.876581  601803 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:30:10.257836  601803 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.381171258s)
	I0917 00:30:10.257875  601803 kic.go:203] duration metric: took 4.381372309s to extract preloaded images to volume ...
	W0917 00:30:10.257972  601803 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:30:10.258004  601803 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:30:10.258041  601803 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:30:10.321005  601803 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-671025-m04 --name ha-671025-m04 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025-m04 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-671025-m04 --network ha-671025 --ip 192.168.49.5 --volume ha-671025-m04:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 00:30:10.615236  601803 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Running}}
	I0917 00:30:10.634825  601803 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:30:10.653971  601803 cli_runner.go:164] Run: docker exec ha-671025-m04 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:30:10.705337  601803 oci.go:144] the created container "ha-671025-m04" has a running status.
	I0917 00:30:10.705369  601803 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa...
	I0917 00:30:11.381957  601803 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0917 00:30:11.382012  601803 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:30:11.415008  601803 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:30:11.434806  601803 cli_runner.go:164] Run: docker inspect ha-671025-m04
	I0917 00:30:11.453607  601803 errors.go:84] Postmortem inspect ("docker inspect ha-671025-m04"): -- stdout --
	[
	    {
	        "Id": "d8a31131303e396c81be948b6fbdd9c04703f8df53d25aa55fcc0bbc60c158df",
	        "Created": "2025-09-17T00:30:10.338289931Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 255,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:30:10.377865799Z",
	            "FinishedAt": "2025-09-17T00:30:10.75893251Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d8a31131303e396c81be948b6fbdd9c04703f8df53d25aa55fcc0bbc60c158df/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d8a31131303e396c81be948b6fbdd9c04703f8df53d25aa55fcc0bbc60c158df/hostname",
	        "HostsPath": "/var/lib/docker/containers/d8a31131303e396c81be948b6fbdd9c04703f8df53d25aa55fcc0bbc60c158df/hosts",
	        "LogPath": "/var/lib/docker/containers/d8a31131303e396c81be948b6fbdd9c04703f8df53d25aa55fcc0bbc60c158df/d8a31131303e396c81be948b6fbdd9c04703f8df53d25aa55fcc0bbc60c158df-json.log",
	        "Name": "/ha-671025-m04",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-671025-m04:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-671025",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d8a31131303e396c81be948b6fbdd9c04703f8df53d25aa55fcc0bbc60c158df",
	                "LowerDir": "/var/lib/docker/overlay2/e31652efb56da48a1eea7020c5d141aa31911097a9ce25ddf4ecd761ef8e1ece-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e31652efb56da48a1eea7020c5d141aa31911097a9ce25ddf4ecd761ef8e1ece/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e31652efb56da48a1eea7020c5d141aa31911097a9ce25ddf4ecd761ef8e1ece/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e31652efb56da48a1eea7020c5d141aa31911097a9ce25ddf4ecd761ef8e1ece/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-671025-m04",
	                "Source": "/var/lib/docker/volumes/ha-671025-m04/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-671025-m04",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-671025-m04",
	                "name.minikube.sigs.k8s.io": "ha-671025-m04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-671025": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.5"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0c35d0ccc41812bde7181e33c481a92e6c52d2d90efef6c84bca54a78763ef8",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-671025-m04",
	                        "d8a31131303e"
	                    ]
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
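Note on the inspect output above: the runtime NetworkSettings fields (EndpointID, IPAddress, MacAddress) are blank because the container is stopped; only the static assignment survives, in IPAMConfig (192.168.49.5) and DNSNames. A sketch for pulling just those surviving fields, using the names shown above:

    docker container inspect ha-671025-m04 \
      --format '{{with index .NetworkSettings.Networks "ha-671025"}}{{.IPAMConfig.IPv4Address}} {{.DNSNames}}{{end}}'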
	I0917 00:30:11.453687  601803 cli_runner.go:164] Run: docker logs --timestamps --details ha-671025-m04
	I0917 00:30:11.474237  601803 errors.go:91] Postmortem logs ("docker logs --timestamps --details ha-671025-m04"): -- stdout --
	2025-09-17T00:30:10.608921316Z  + userns=
	2025-09-17T00:30:10.609572305Z  + grep -Eqv '0[[:space:]]+0[[:space:]]+4294967295' /proc/self/uid_map
	2025-09-17T00:30:10.610794279Z  + validate_userns
	2025-09-17T00:30:10.610826053Z  + [[ -z '' ]]
	2025-09-17T00:30:10.610836221Z  + return
	2025-09-17T00:30:10.610840300Z  + configure_containerd
	2025-09-17T00:30:10.610887173Z  + local snapshotter=
	2025-09-17T00:30:10.610910331Z  + [[ -n '' ]]
	2025-09-17T00:30:10.610918976Z  + [[ -z '' ]]
	2025-09-17T00:30:10.611502053Z  ++ stat -f -c %T /kind
	2025-09-17T00:30:10.613020621Z  + container_filesystem=overlayfs
	2025-09-17T00:30:10.613039507Z  + [[ overlayfs == \z\f\s ]]
	2025-09-17T00:30:10.613043755Z  + [[ -n '' ]]
	2025-09-17T00:30:10.613094579Z  + configure_proxy
	2025-09-17T00:30:10.613105249Z  + mkdir -p /etc/systemd/system.conf.d/
	2025-09-17T00:30:10.617323418Z  + [[ ! -z '' ]]
	2025-09-17T00:30:10.617341941Z  + cat
	2025-09-17T00:30:10.618546046Z  + fix_mount
	2025-09-17T00:30:10.618563361Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2025-09-17T00:30:10.618567105Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2025-09-17T00:30:10.618750047Z  ++ which mount
	2025-09-17T00:30:10.620225285Z  ++ which umount
	2025-09-17T00:30:10.621128173Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2025-09-17T00:30:10.627152151Z  ++ which mount
	2025-09-17T00:30:10.628793840Z  ++ which umount
	2025-09-17T00:30:10.629711979Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2025-09-17T00:30:10.631546881Z  +++ which mount
	2025-09-17T00:30:10.632749361Z  ++ stat -f -c %T /usr/bin/mount
	2025-09-17T00:30:10.633701797Z  + [[ overlayfs == \a\u\f\s ]]
	2025-09-17T00:30:10.633717187Z  + echo 'INFO: remounting /sys read-only'
	2025-09-17T00:30:10.633720730Z  INFO: remounting /sys read-only
	2025-09-17T00:30:10.633723770Z  + mount -o remount,ro /sys
	2025-09-17T00:30:10.635663426Z  + echo 'INFO: making mounts shared'
	2025-09-17T00:30:10.635682617Z  INFO: making mounts shared
	2025-09-17T00:30:10.635686363Z  + mount --make-rshared /
	2025-09-17T00:30:10.637569587Z  + retryable_fix_cgroup
	2025-09-17T00:30:10.637924070Z  ++ seq 0 10
	2025-09-17T00:30:10.638760456Z  + for i in $(seq 0 10)
	2025-09-17T00:30:10.638770113Z  + fix_cgroup
	2025-09-17T00:30:10.638819566Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2025-09-17T00:30:10.638857101Z  + echo 'INFO: detected cgroup v2'
	2025-09-17T00:30:10.638861296Z  INFO: detected cgroup v2
	2025-09-17T00:30:10.638905448Z  + return
	2025-09-17T00:30:10.638918134Z  + return
	2025-09-17T00:30:10.638941969Z  + fix_machine_id
	2025-09-17T00:30:10.638956313Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2025-09-17T00:30:10.638959620Z  INFO: clearing and regenerating /etc/machine-id
	2025-09-17T00:30:10.638972632Z  + rm -f /etc/machine-id
	2025-09-17T00:30:10.640132953Z  + systemd-machine-id-setup
	2025-09-17T00:30:10.644084502Z  Initializing machine ID from random generator.
	2025-09-17T00:30:10.646483723Z  + fix_product_name
	2025-09-17T00:30:10.646501067Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2025-09-17T00:30:10.646504855Z  + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
	2025-09-17T00:30:10.646508356Z  INFO: faking /sys/class/dmi/id/product_name to be "kind"
	2025-09-17T00:30:10.646511448Z  + echo kind
	2025-09-17T00:30:10.648535554Z  + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
	2025-09-17T00:30:10.650045952Z  + fix_product_uuid
	2025-09-17T00:30:10.650062850Z  + [[ ! -f /kind/product_uuid ]]
	2025-09-17T00:30:10.650066048Z  + cat /proc/sys/kernel/random/uuid
	2025-09-17T00:30:10.651194104Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2025-09-17T00:30:10.651210498Z  + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
	2025-09-17T00:30:10.651214062Z  INFO: faking /sys/class/dmi/id/product_uuid to be random
	2025-09-17T00:30:10.651243611Z  + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	2025-09-17T00:30:10.652992697Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2025-09-17T00:30:10.653038127Z  + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
	2025-09-17T00:30:10.653046673Z  INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
	2025-09-17T00:30:10.653049817Z  + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
	2025-09-17T00:30:10.654483726Z  + select_iptables
	2025-09-17T00:30:10.654501242Z  + local mode num_legacy_lines num_nft_lines
	2025-09-17T00:30:10.655526996Z  ++ grep -c '^-'
	2025-09-17T00:30:10.658771404Z  ++ true
	2025-09-17T00:30:10.658951213Z  + num_legacy_lines=0
	2025-09-17T00:30:10.660284851Z  ++ grep -c '^-'
	2025-09-17T00:30:10.666176694Z  + num_nft_lines=6
	2025-09-17T00:30:10.666203233Z  + '[' 0 -ge 6 ']'
	2025-09-17T00:30:10.666207603Z  + mode=nft
	2025-09-17T00:30:10.666210580Z  + echo 'INFO: setting iptables to detected mode: nft'
	2025-09-17T00:30:10.666310783Z  INFO: setting iptables to detected mode: nft
	2025-09-17T00:30:10.666323715Z  + update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-17T00:30:10.666338850Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-nft'
	2025-09-17T00:30:10.666341220Z  + local 'args=--set iptables /usr/sbin/iptables-nft'
	2025-09-17T00:30:10.666739065Z  ++ seq 0 15
	2025-09-17T00:30:10.667529147Z  + for i in $(seq 0 15)
	2025-09-17T00:30:10.667543632Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-17T00:30:10.670835439Z  + return
	2025-09-17T00:30:10.670848366Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-17T00:30:10.670948594Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-17T00:30:10.670952781Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-17T00:30:10.671354143Z  ++ seq 0 15
	2025-09-17T00:30:10.672197003Z  + for i in $(seq 0 15)
	2025-09-17T00:30:10.672213261Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-17T00:30:10.674985518Z  + return
	2025-09-17T00:30:10.675101203Z  + enable_network_magic
	2025-09-17T00:30:10.675120003Z  + local docker_embedded_dns_ip=127.0.0.11
	2025-09-17T00:30:10.675123845Z  + local docker_host_ip
	2025-09-17T00:30:10.676279478Z  ++ cut '-d ' -f1
	2025-09-17T00:30:10.676514530Z  ++ head -n1 /dev/fd/63
	2025-09-17T00:30:10.676619781Z  +++ timeout 5 getent ahostsv4 host.docker.internal
	2025-09-17T00:30:10.714376891Z  + docker_host_ip=
	2025-09-17T00:30:10.714417592Z  + [[ -z '' ]]
	2025-09-17T00:30:10.715085211Z  ++ ip -4 route show default
	2025-09-17T00:30:10.715256956Z  ++ cut '-d ' -f3
	2025-09-17T00:30:10.717300557Z  + docker_host_ip=192.168.49.1
	2025-09-17T00:30:10.717646998Z  + iptables-save
	2025-09-17T00:30:10.718155376Z  + iptables-restore
	2025-09-17T00:30:10.721289190Z  + sed -e 's/-d 127.0.0.11/-d 192.168.49.1/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.49.1:53/g' -e 's/p -j DNAT --to-destination 127.0.0.11/p --dport 53 -j DNAT --to-destination 127.0.0.11/g'
	2025-09-17T00:30:10.732998725Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2025-09-17T00:30:10.735078705Z  ++ sed -e s/127.0.0.11/192.168.49.1/g /etc/resolv.conf.original
	2025-09-17T00:30:10.736338072Z  + replaced='# Generated by Docker Engine.
	2025-09-17T00:30:10.736351188Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-17T00:30:10.736354355Z  # has been modified.
	2025-09-17T00:30:10.736357293Z  
	2025-09-17T00:30:10.736359800Z  nameserver 192.168.49.1
	2025-09-17T00:30:10.736362800Z  search local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-17T00:30:10.736365944Z  options edns0 trust-ad ndots:0
	2025-09-17T00:30:10.736381961Z  
	2025-09-17T00:30:10.736384951Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-17T00:30:10.736409663Z  # ExtServers: [host(127.0.0.53)]
	2025-09-17T00:30:10.736414887Z  # Overrides: []
	2025-09-17T00:30:10.736417561Z  # Option ndots from: internal'
	2025-09-17T00:30:10.736420061Z  + [[ '' == '' ]]
	2025-09-17T00:30:10.736422957Z  + echo '# Generated by Docker Engine.
	2025-09-17T00:30:10.736425724Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-17T00:30:10.736428153Z  # has been modified.
	2025-09-17T00:30:10.736430768Z  
	2025-09-17T00:30:10.736433130Z  nameserver 192.168.49.1
	2025-09-17T00:30:10.736435845Z  search local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-17T00:30:10.736439083Z  options edns0 trust-ad ndots:0
	2025-09-17T00:30:10.736441893Z  
	2025-09-17T00:30:10.736444350Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-17T00:30:10.736447601Z  # ExtServers: [host(127.0.0.53)]
	2025-09-17T00:30:10.736450423Z  # Overrides: []
	2025-09-17T00:30:10.736453416Z  # Option ndots from: internal'
	2025-09-17T00:30:10.736597437Z  + files_to_update=('/etc/kubernetes/manifests/etcd.yaml' '/etc/kubernetes/manifests/kube-apiserver.yaml' '/etc/kubernetes/manifests/kube-controller-manager.yaml' '/etc/kubernetes/manifests/kube-scheduler.yaml' '/etc/kubernetes/controller-manager.conf' '/etc/kubernetes/scheduler.conf' '/kind/kubeadm.conf' '/var/lib/kubelet/kubeadm-flags.env')
	2025-09-17T00:30:10.736605895Z  + local files_to_update
	2025-09-17T00:30:10.736609001Z  + local should_fix_certificate=false
	2025-09-17T00:30:10.737724887Z  ++ cut '-d ' -f1
	2025-09-17T00:30:10.737894657Z  ++ head -n1 /dev/fd/63
	2025-09-17T00:30:10.738399111Z  ++++ hostname
	2025-09-17T00:30:10.739132697Z  +++ timeout 5 getent ahostsv4 ha-671025-m04
	2025-09-17T00:30:10.741921659Z  + curr_ipv4=192.168.49.5
	2025-09-17T00:30:10.741934720Z  + echo 'INFO: Detected IPv4 address: 192.168.49.5'
	2025-09-17T00:30:10.741937248Z  INFO: Detected IPv4 address: 192.168.49.5
	2025-09-17T00:30:10.741939154Z  + '[' -f /kind/old-ipv4 ']'
	2025-09-17T00:30:10.741990397Z  + [[ -n 192.168.49.5 ]]
	2025-09-17T00:30:10.742001184Z  + echo -n 192.168.49.5
	2025-09-17T00:30:10.743212532Z  ++ cut '-d ' -f1
	2025-09-17T00:30:10.743278070Z  ++ head -n1 /dev/fd/63
	2025-09-17T00:30:10.743866613Z  ++++ hostname
	2025-09-17T00:30:10.744639239Z  +++ timeout 5 getent ahostsv6 ha-671025-m04
	2025-09-17T00:30:10.746974739Z  + curr_ipv6=
	2025-09-17T00:30:10.746987782Z  + echo 'INFO: Detected IPv6 address: '
	2025-09-17T00:30:10.747002591Z  INFO: Detected IPv6 address: 
	2025-09-17T00:30:10.747005794Z  + '[' -f /kind/old-ipv6 ']'
	2025-09-17T00:30:10.747073575Z  + [[ -n '' ]]
	2025-09-17T00:30:10.747085342Z  + false
	2025-09-17T00:30:10.747627679Z  ++ uname -a
	2025-09-17T00:30:10.748465614Z  + echo 'entrypoint completed: Linux ha-671025-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux'
	2025-09-17T00:30:10.748480389Z  entrypoint completed: Linux ha-671025-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	2025-09-17T00:30:10.748484631Z  + exec /sbin/init
	2025-09-17T00:30:10.755276058Z  systemd 249.11-0ubuntu3.16 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
	2025-09-17T00:30:10.755306952Z  Detected virtualization docker.
	2025-09-17T00:30:10.755310666Z  Detected architecture x86-64.
	2025-09-17T00:30:10.755415970Z  
	2025-09-17T00:30:10.755433601Z  Welcome to Ubuntu 22.04.5 LTS!
	2025-09-17T00:30:10.755437818Z  
	2025-09-17T00:30:10.755906368Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:30:10.755914573Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:30:10.755917845Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:30:10.755921071Z  Exiting PID 1...
	
	-- /stdout --
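The trace above shows the entrypoint completing normally; PID 1 then dies when systemd cannot allocate its control-group inotify object. "Too many open files" from inotify creation usually points at the host kernel's per-user inotify limits, which every kic container on this agent shares, rather than at an nofile ulimit. A minimal diagnostic sketch (the sysctl names are standard Linux kernel parameters; the raised values are illustrative, not taken from this report):

    # Check the host's inotify limits, shared by all kic/kind containers
    sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
    # Raise them for the current boot; persist via /etc/sysctl.d if this helps
    sudo sysctl -w fs.inotify.max_user_instances=1024
    sudo sysctl -w fs.inotify.max_user_watches=524288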
	I0917 00:30:11.474351  601803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:30:11.531217  601803 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:30:11.52150239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:30:11.531326  601803 errors.go:98] postmortem docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:30:11.52150239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:30:11.531416  601803 network_create.go:284] running [docker network inspect ha-671025-m04] to gather additional debugging logs...
	I0917 00:30:11.531439  601803 cli_runner.go:164] Run: docker network inspect ha-671025-m04
	W0917 00:30:11.550166  601803 cli_runner.go:211] docker network inspect ha-671025-m04 returned with exit code 1
	I0917 00:30:11.550210  601803 network_create.go:287] error running [docker network inspect ha-671025-m04]: docker network inspect ha-671025-m04: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-671025-m04 not found
	I0917 00:30:11.550235  601803 network_create.go:289] output of [docker network inspect ha-671025-m04]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-671025-m04 not found
	
	** /stderr **
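The exit-1 here is expected during postmortem collection: minikube also probes for a network named after the node, and no per-node network exists; the node attaches to the cluster network ha-671025 (NetworkID c0c35d0ccc41... in the inspect output above). To confirm against the network that does exist, something like:

    docker network inspect ha-671025 \
      --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'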
	I0917 00:30:11.550308  601803 client.go:171] duration metric: took 6.135254905s to LocalClient.Create
	I0917 00:30:13.551428  601803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:30:13.551501  601803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:30:13.571158  601803 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:30:13.571282  601803 retry.go:31] will retry after 363.815793ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:30:13.935646  601803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:30:13.956342  601803 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:30:13.956492  601803 retry.go:31] will retry after 222.762004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:30:14.179976  601803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:30:14.199764  601803 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:30:14.199897  601803 retry.go:31] will retry after 789.112623ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:30:14.989912  601803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:30:15.009046  601803 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	W0917 00:30:15.009237  601803 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:30:15.009259  601803 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:30:15.009316  601803 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:30:15.009357  601803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:30:15.027810  601803 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:30:15.027939  601803 retry.go:31] will retry after 345.132096ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:30:15.373438  601803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:30:15.393535  601803 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:30:15.393678  601803 retry.go:31] will retry after 286.241858ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:30:15.680202  601803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:30:15.700464  601803 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:30:15.700584  601803 retry.go:31] will retry after 515.433505ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:30:16.216342  601803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:30:16.235705  601803 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	W0917 00:30:16.235856  601803 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:30:16.235876  601803 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
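Both disk-pressure probes are plain shell pipelines run over ssh; they fail here only because a stopped container publishes no ssh port, not because of anything on /var. Against a running node they reduce to:

    df -h /var  | awk 'NR==2{print $5}'   # percent of /var used, e.g. "12%"
    df -BG /var | awk 'NR==2{print $4}'   # GiB still available, e.g. "85G"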
	I0917 00:30:16.235888  601803 start.go:128] duration metric: took 10.823047713s to createHost
	I0917 00:30:16.235900  601803 start.go:83] releasing machines lock for "ha-671025-m04", held for 10.823244343s
	W0917 00:30:16.235918  601803 start.go:714] error starting host: creating host: create: creating: prepare kic ssh: container name "ha-671025-m04" state Stopped: log: 2025-09-17T00:30:10.755906368Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:30:10.755914573Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:30:10.755917845Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:30:10.755921071Z  Exiting PID 1...: container exited unexpectedly
	I0917 00:30:16.236310  601803 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:30:16.254741  601803 stop.go:39] StopHost: ha-671025-m04
	W0917 00:30:16.255141  601803 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0917 00:30:16.257312  601803 out.go:179] * Stopping node "ha-671025-m04"  ...
	I0917 00:30:16.258638  601803 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:30:16.277306  601803 stop.go:87] host is in state Stopped
	I0917 00:30:16.277386  601803 main.go:141] libmachine: Stopping "ha-671025-m04"...
	I0917 00:30:16.277486  601803 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:30:16.296900  601803 stop.go:66] stop err: Machine "ha-671025-m04" is already stopped.
	I0917 00:30:16.296939  601803 stop.go:69] host is already stopped
	W0917 00:30:17.297605  601803 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0917 00:30:17.299616  601803 out.go:179] * Deleting "ha-671025-m04" in docker ...
	I0917 00:30:17.301087  601803 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-671025-m04
	I0917 00:30:17.320697  601803 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:30:17.338297  601803 cli_runner.go:164] Run: docker exec --privileged -t ha-671025-m04 /bin/bash -c "sudo init 0"
	W0917 00:30:17.356821  601803 cli_runner.go:211] docker exec --privileged -t ha-671025-m04 /bin/bash -c "sudo init 0" returned with exit code 1
	I0917 00:30:17.356867  601803 oci.go:659] error shutdown ha-671025-m04: docker exec --privileged -t ha-671025-m04 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container d8a31131303e396c81be948b6fbdd9c04703f8df53d25aa55fcc0bbc60c158df is not running
	I0917 00:30:18.357062  601803 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:30:18.376169  601803 oci.go:667] container ha-671025-m04 status is Stopped
	I0917 00:30:18.376204  601803 oci.go:679] Successfully shutdown container ha-671025-m04
	I0917 00:30:18.376251  601803 cli_runner.go:164] Run: docker rm -f -v ha-671025-m04
	I0917 00:30:18.400432  601803 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-671025-m04
	W0917 00:30:18.417916  601803 cli_runner.go:211] docker container inspect -f {{.Id}} ha-671025-m04 returned with exit code 1
	I0917 00:30:18.418016  601803 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:30:18.436022  601803 cli_runner.go:164] Run: docker network rm ha-671025
	W0917 00:30:18.454308  601803 cli_runner.go:211] docker network rm ha-671025 returned with exit code 1
	W0917 00:30:18.454468  601803 kic.go:390] failed to remove network (which might be okay) ha-671025: unable to delete a network that is attached to a running container
	W0917 00:30:18.454696  601803 out.go:285] ! StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: container name "ha-671025-m04" state Stopped: log: 2025-09-17T00:30:10.755906368Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:30:10.755914573Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:30:10.755917845Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:30:10.755921071Z  Exiting PID 1...: container exited unexpectedly
	! StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: container name "ha-671025-m04" state Stopped: log: 2025-09-17T00:30:10.755906368Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:30:10.755914573Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:30:10.755917845Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:30:10.755921071Z  Exiting PID 1...: container exited unexpectedly
	I0917 00:30:18.454716  601803 start.go:729] Will try again in 5 seconds ...
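The port lookups above retry with short randomized delays (retry.go), and the whole host creation is then retried once more after 5 seconds. A shell rendering of the inner polling loop, illustrative only (the real logic is minikube's Go retry helper):

    # Poll for the published ssh port with short waits; inspect exits non-zero
    # while the container is stopped, so the loop keeps retrying
    for delay in 0.4 0.2 0.8; do
      port=$(docker container inspect ha-671025-m04 \
        -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}') && break
      sleep "$delay"
    done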
	I0917 00:30:23.457911  601803 start.go:360] acquireMachinesLock for ha-671025-m04: {Name:mka8d143727db583191b041d9fdffdc34290d3fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:30:23.458037  601803 start.go:364] duration metric: took 66.901µs to acquireMachinesLock for "ha-671025-m04"
	I0917 00:30:23.458079  601803 start.go:93] Provisioning new machine with config: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0917 00:30:23.458201  601803 start.go:125] createHost starting for "m04" (driver="docker")
	I0917 00:30:23.460302  601803 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:30:23.460455  601803 start.go:159] libmachine.API.Create for "ha-671025" (driver="docker")
	I0917 00:30:23.460484  601803 client.go:168] LocalClient.Create starting
	I0917 00:30:23.460559  601803 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 00:30:23.460608  601803 main.go:141] libmachine: Decoding PEM data...
	I0917 00:30:23.460623  601803 main.go:141] libmachine: Parsing certificate...
	I0917 00:30:23.460698  601803 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 00:30:23.460721  601803 main.go:141] libmachine: Decoding PEM data...
	I0917 00:30:23.460732  601803 main.go:141] libmachine: Parsing certificate...
	I0917 00:30:23.460991  601803 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:30:23.480192  601803 network_create.go:77] Found existing network {name:ha-671025 subnet:0xc000c580f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0917 00:30:23.480239  601803 kic.go:121] calculated static IP "192.168.49.5" for the "ha-671025-m04" container
	I0917 00:30:23.480293  601803 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:30:23.498519  601803 cli_runner.go:164] Run: docker volume create ha-671025-m04 --label name.minikube.sigs.k8s.io=ha-671025-m04 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:30:23.517896  601803 oci.go:103] Successfully created a docker volume ha-671025-m04
	I0917 00:30:23.518021  601803 cli_runner.go:164] Run: docker run --rm --name ha-671025-m04-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025-m04 --entrypoint /usr/bin/test -v ha-671025-m04:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:30:23.793889  601803 oci.go:107] Successfully prepared a docker volume ha-671025-m04
	I0917 00:30:23.793916  601803 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:30:23.793943  601803 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:30:23.794069  601803 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:30:28.354430  601803 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.560292665s)
	I0917 00:30:28.354469  601803 kic.go:203] duration metric: took 4.560520275s to extract preloaded images to volume ...
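The 4.56 s step above hydrates the node's /var volume from the preloaded image tarball before the node container itself starts. With the long host path and pinned image digest abbreviated to illustrative shell variables, the pattern is:

    # Extract the lz4 preload tarball into the node volume via a throwaway container
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD_TARBALL":/preloaded.tar:ro \
      -v ha-671025-m04:/extractDir \
      "$KICBASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir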
	W0917 00:30:28.354569  601803 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:30:28.354606  601803 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:30:28.354650  601803 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:30:28.415698  601803 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-671025-m04 --name ha-671025-m04 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025-m04 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-671025-m04 --network ha-671025 --ip 192.168.49.5 --volume ha-671025-m04:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
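For readability, the same create command re-wrapped with its flags grouped (flags reordered but otherwise identical to the log line above):

    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run \
      -v /lib/modules:/lib/modules:ro --volume ha-671025-m04:/var \
      --hostname ha-671025-m04 --name ha-671025-m04 \
      --label created_by.minikube.sigs.k8s.io=true \
      --label name.minikube.sigs.k8s.io=ha-671025-m04 \
      --label role.minikube.sigs.k8s.io= \
      --label mode.minikube.sigs.k8s.io=ha-671025-m04 \
      --network ha-671025 --ip 192.168.49.5 \
      --memory=3072mb -e container=docker \
      --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
      --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
      gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1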
	I0917 00:30:28.693178  601803 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Running}}
	I0917 00:30:28.713981  601803 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:30:28.734069  601803 cli_runner.go:164] Run: docker exec ha-671025-m04 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:30:28.783154  601803 oci.go:144] the created container "ha-671025-m04" has a running status.
	I0917 00:30:28.783192  601803 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa...
	I0917 00:30:28.875675  601803 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0917 00:30:28.875720  601803 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:30:29.161585  601803 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:30:29.182373  601803 cli_runner.go:164] Run: docker inspect ha-671025-m04
	I0917 00:30:29.203133  601803 errors.go:84] Postmortem inspect ("docker inspect ha-671025-m04"): -- stdout --
	[
	    {
	        "Id": "f7f41ef659937f438d5115bb55021315609d0ce4a0d6dd8e0c8fd03f0f7459c7",
	        "Created": "2025-09-17T00:30:28.431926362Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 255,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:30:28.470435488Z",
	            "FinishedAt": "2025-09-17T00:30:28.840401541Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/f7f41ef659937f438d5115bb55021315609d0ce4a0d6dd8e0c8fd03f0f7459c7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f7f41ef659937f438d5115bb55021315609d0ce4a0d6dd8e0c8fd03f0f7459c7/hostname",
	        "HostsPath": "/var/lib/docker/containers/f7f41ef659937f438d5115bb55021315609d0ce4a0d6dd8e0c8fd03f0f7459c7/hosts",
	        "LogPath": "/var/lib/docker/containers/f7f41ef659937f438d5115bb55021315609d0ce4a0d6dd8e0c8fd03f0f7459c7/f7f41ef659937f438d5115bb55021315609d0ce4a0d6dd8e0c8fd03f0f7459c7-json.log",
	        "Name": "/ha-671025-m04",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-671025-m04:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-671025",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f7f41ef659937f438d5115bb55021315609d0ce4a0d6dd8e0c8fd03f0f7459c7",
	                "LowerDir": "/var/lib/docker/overlay2/0c4c197f74b218d0e5e51a01b61d0be9a8b5978dc02cb783e1a7812c4ee40ccc-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c4c197f74b218d0e5e51a01b61d0be9a8b5978dc02cb783e1a7812c4ee40ccc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c4c197f74b218d0e5e51a01b61d0be9a8b5978dc02cb783e1a7812c4ee40ccc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c4c197f74b218d0e5e51a01b61d0be9a8b5978dc02cb783e1a7812c4ee40ccc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-671025-m04",
	                "Source": "/var/lib/docker/volumes/ha-671025-m04/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-671025-m04",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-671025-m04",
	                "name.minikube.sigs.k8s.io": "ha-671025-m04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-671025": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.5"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0c35d0ccc41812bde7181e33c481a92e6c52d2d90efef6c84bca54a78763ef8",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-671025-m04",
	                        "f7f41ef65993"
	                    ]
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
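The second attempt dies the same way: created at 00:30:28.43, started at 00:30:28.47, finished at 00:30:28.84 with ExitCode 255, i.e. PID 1 exited about 0.4 s after start without an OOM kill. When triaging this flake, the handful of State fields above are the ones worth pulling directly:

    docker container inspect ha-671025-m04 --format \
      'status={{.State.Status}} exit={{.State.ExitCode}} oom={{.State.OOMKilled}} finished={{.State.FinishedAt}}'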
	I0917 00:30:29.203217  601803 cli_runner.go:164] Run: docker logs --timestamps --details ha-671025-m04
	I0917 00:30:29.225801  601803 errors.go:91] Postmortem logs ("docker logs --timestamps --details ha-671025-m04"): -- stdout --
	2025-09-17T00:30:28.685891580Z  + userns=
	2025-09-17T00:30:28.685930562Z  + grep -Eqv '0[[:space:]]+0[[:space:]]+4294967295' /proc/self/uid_map
	2025-09-17T00:30:28.688399467Z  + validate_userns
	2025-09-17T00:30:28.688419432Z  + [[ -z '' ]]
	2025-09-17T00:30:28.688423154Z  + return
	2025-09-17T00:30:28.688425850Z  + configure_containerd
	2025-09-17T00:30:28.688428932Z  + local snapshotter=
	2025-09-17T00:30:28.688465400Z  + [[ -n '' ]]
	2025-09-17T00:30:28.688469691Z  + [[ -z '' ]]
	2025-09-17T00:30:28.689008947Z  ++ stat -f -c %T /kind
	2025-09-17T00:30:28.690481060Z  + container_filesystem=overlayfs
	2025-09-17T00:30:28.690501387Z  + [[ overlayfs == \z\f\s ]]
	2025-09-17T00:30:28.690505882Z  + [[ -n '' ]]
	2025-09-17T00:30:28.690533739Z  + configure_proxy
	2025-09-17T00:30:28.690548208Z  + mkdir -p /etc/systemd/system.conf.d/
	2025-09-17T00:30:28.694833139Z  + [[ ! -z '' ]]
	2025-09-17T00:30:28.694858455Z  + cat
	2025-09-17T00:30:28.696105609Z  + fix_mount
	2025-09-17T00:30:28.696126671Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2025-09-17T00:30:28.696130362Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2025-09-17T00:30:28.696505391Z  ++ which mount
	2025-09-17T00:30:28.698022161Z  ++ which umount
	2025-09-17T00:30:28.699096654Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2025-09-17T00:30:28.706310009Z  ++ which mount
	2025-09-17T00:30:28.707887751Z  ++ which umount
	2025-09-17T00:30:28.709081831Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2025-09-17T00:30:28.710845317Z  +++ which mount
	2025-09-17T00:30:28.711870261Z  ++ stat -f -c %T /usr/bin/mount
	2025-09-17T00:30:28.713140448Z  + [[ overlayfs == \a\u\f\s ]]
	2025-09-17T00:30:28.713166905Z  + echo 'INFO: remounting /sys read-only'
	2025-09-17T00:30:28.713170954Z  INFO: remounting /sys read-only
	2025-09-17T00:30:28.713173851Z  + mount -o remount,ro /sys
	2025-09-17T00:30:28.715133214Z  + echo 'INFO: making mounts shared'
	2025-09-17T00:30:28.715151594Z  INFO: making mounts shared
	2025-09-17T00:30:28.715155039Z  + mount --make-rshared /
	2025-09-17T00:30:28.716683320Z  + retryable_fix_cgroup
	2025-09-17T00:30:28.717081030Z  ++ seq 0 10
	2025-09-17T00:30:28.718148256Z  + for i in $(seq 0 10)
	2025-09-17T00:30:28.718171019Z  + fix_cgroup
	2025-09-17T00:30:28.718175174Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2025-09-17T00:30:28.718178318Z  + echo 'INFO: detected cgroup v2'
	2025-09-17T00:30:28.718181043Z  INFO: detected cgroup v2
	2025-09-17T00:30:28.718200328Z  + return
	2025-09-17T00:30:28.718207559Z  + return
	2025-09-17T00:30:28.718210913Z  + fix_machine_id
	2025-09-17T00:30:28.718213562Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2025-09-17T00:30:28.718217031Z  INFO: clearing and regenerating /etc/machine-id
	2025-09-17T00:30:28.718220542Z  + rm -f /etc/machine-id
	2025-09-17T00:30:28.719382565Z  + systemd-machine-id-setup
	2025-09-17T00:30:28.723069892Z  Initializing machine ID from random generator.
	2025-09-17T00:30:28.725718064Z  + fix_product_name
	2025-09-17T00:30:28.725734609Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2025-09-17T00:30:28.725738350Z  + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
	2025-09-17T00:30:28.725741623Z  INFO: faking /sys/class/dmi/id/product_name to be "kind"
	2025-09-17T00:30:28.725744546Z  + echo kind
	2025-09-17T00:30:28.726977443Z  + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
	2025-09-17T00:30:28.728916713Z  + fix_product_uuid
	2025-09-17T00:30:28.728934841Z  + [[ ! -f /kind/product_uuid ]]
	2025-09-17T00:30:28.728938543Z  + cat /proc/sys/kernel/random/uuid
	2025-09-17T00:30:28.730039154Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2025-09-17T00:30:28.730054695Z  + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
	2025-09-17T00:30:28.730057460Z  INFO: faking /sys/class/dmi/id/product_uuid to be random
	2025-09-17T00:30:28.730059872Z  + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	2025-09-17T00:30:28.731690417Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2025-09-17T00:30:28.731708041Z  + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
	2025-09-17T00:30:28.731712312Z  INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
	2025-09-17T00:30:28.731715436Z  + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
	2025-09-17T00:30:28.733340680Z  + select_iptables
	2025-09-17T00:30:28.733357692Z  + local mode num_legacy_lines num_nft_lines
	2025-09-17T00:30:28.734359714Z  ++ grep -c '^-'
	2025-09-17T00:30:28.737165652Z  ++ true
	2025-09-17T00:30:28.737463009Z  + num_legacy_lines=0
	2025-09-17T00:30:28.738439923Z  ++ grep -c '^-'
	2025-09-17T00:30:28.744487885Z  + num_nft_lines=6
	2025-09-17T00:30:28.744509313Z  + '[' 0 -ge 6 ']'
	2025-09-17T00:30:28.744512999Z  + mode=nft
	2025-09-17T00:30:28.744515724Z  + echo 'INFO: setting iptables to detected mode: nft'
	2025-09-17T00:30:28.744518657Z  INFO: setting iptables to detected mode: nft
	2025-09-17T00:30:28.744521349Z  + update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-17T00:30:28.744579918Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-nft'
	2025-09-17T00:30:28.744592806Z  + local 'args=--set iptables /usr/sbin/iptables-nft'
	2025-09-17T00:30:28.744977654Z  ++ seq 0 15
	2025-09-17T00:30:28.745870966Z  + for i in $(seq 0 15)
	2025-09-17T00:30:28.745886874Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-17T00:30:28.747207749Z  + return
	2025-09-17T00:30:28.747222965Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-17T00:30:28.747226593Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-17T00:30:28.747229541Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-17T00:30:28.747688204Z  ++ seq 0 15
	2025-09-17T00:30:28.748547160Z  + for i in $(seq 0 15)
	2025-09-17T00:30:28.748560277Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-17T00:30:28.749799024Z  + return
	2025-09-17T00:30:28.749816040Z  + enable_network_magic
	2025-09-17T00:30:28.749828863Z  + local docker_embedded_dns_ip=127.0.0.11
	2025-09-17T00:30:28.749832446Z  + local docker_host_ip
	2025-09-17T00:30:28.751121876Z  ++ cut '-d ' -f1
	2025-09-17T00:30:28.751324653Z  ++ head -n1 /dev/fd/63
	2025-09-17T00:30:28.751337788Z  +++ timeout 5 getent ahostsv4 host.docker.internal
	2025-09-17T00:30:28.799181688Z  + docker_host_ip=
	2025-09-17T00:30:28.799286138Z  + [[ -z '' ]]
	2025-09-17T00:30:28.799828351Z  ++ ip -4 route show default
	2025-09-17T00:30:28.799917795Z  ++ cut '-d ' -f3
	2025-09-17T00:30:28.801938766Z  + docker_host_ip=192.168.49.1
	2025-09-17T00:30:28.802304071Z  + iptables-save
	2025-09-17T00:30:28.802733181Z  + iptables-restore
	2025-09-17T00:30:28.805324663Z  + sed -e 's/-d 127.0.0.11/-d 192.168.49.1/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.49.1:53/g' -e 's/p -j DNAT --to-destination 127.0.0.11/p --dport 53 -j DNAT --to-destination 127.0.0.11/g'
	2025-09-17T00:30:28.814017282Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2025-09-17T00:30:28.815981930Z  ++ sed -e s/127.0.0.11/192.168.49.1/g /etc/resolv.conf.original
	2025-09-17T00:30:28.817217556Z  + replaced='# Generated by Docker Engine.
	2025-09-17T00:30:28.817232612Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-17T00:30:28.817235084Z  # has been modified.
	2025-09-17T00:30:28.817236938Z  
	2025-09-17T00:30:28.817238706Z  nameserver 192.168.49.1
	2025-09-17T00:30:28.817240608Z  search local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-17T00:30:28.817242529Z  options edns0 trust-ad ndots:0
	2025-09-17T00:30:28.817253082Z  
	2025-09-17T00:30:28.817254833Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-17T00:30:28.817256724Z  # ExtServers: [host(127.0.0.53)]
	2025-09-17T00:30:28.817258387Z  # Overrides: []
	2025-09-17T00:30:28.817260107Z  # Option ndots from: internal'
	2025-09-17T00:30:28.817261768Z  + [[ '' == '' ]]
	2025-09-17T00:30:28.817268189Z  + echo '# Generated by Docker Engine.
	2025-09-17T00:30:28.817270193Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-17T00:30:28.817272045Z  # has been modified.
	2025-09-17T00:30:28.817273874Z  
	2025-09-17T00:30:28.817276292Z  nameserver 192.168.49.1
	2025-09-17T00:30:28.817279214Z  search local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-17T00:30:28.817282424Z  options edns0 trust-ad ndots:0
	2025-09-17T00:30:28.817285217Z  
	2025-09-17T00:30:28.817287686Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-17T00:30:28.817290551Z  # ExtServers: [host(127.0.0.53)]
	2025-09-17T00:30:28.817293577Z  # Overrides: []
	2025-09-17T00:30:28.817296367Z  # Option ndots from: internal'
	2025-09-17T00:30:28.817414571Z  + files_to_update=('/etc/kubernetes/manifests/etcd.yaml' '/etc/kubernetes/manifests/kube-apiserver.yaml' '/etc/kubernetes/manifests/kube-controller-manager.yaml' '/etc/kubernetes/manifests/kube-scheduler.yaml' '/etc/kubernetes/controller-manager.conf' '/etc/kubernetes/scheduler.conf' '/kind/kubeadm.conf' '/var/lib/kubelet/kubeadm-flags.env')
	2025-09-17T00:30:28.817430574Z  + local files_to_update
	2025-09-17T00:30:28.817434175Z  + local should_fix_certificate=false
	2025-09-17T00:30:28.818681508Z  ++ cut '-d ' -f1
	2025-09-17T00:30:28.818736574Z  ++ head -n1 /dev/fd/63
	2025-09-17T00:30:28.819294476Z  ++++ hostname
	2025-09-17T00:30:28.820155352Z  +++ timeout 5 getent ahostsv4 ha-671025-m04
	2025-09-17T00:30:28.822888012Z  + curr_ipv4=192.168.49.5
	2025-09-17T00:30:28.822903543Z  + echo 'INFO: Detected IPv4 address: 192.168.49.5'
	2025-09-17T00:30:28.822906645Z  INFO: Detected IPv4 address: 192.168.49.5
	2025-09-17T00:30:28.822909215Z  + '[' -f /kind/old-ipv4 ']'
	2025-09-17T00:30:28.822911970Z  + [[ -n 192.168.49.5 ]]
	2025-09-17T00:30:28.822914735Z  + echo -n 192.168.49.5
	2025-09-17T00:30:28.824211251Z  ++ cut '-d ' -f1
	2025-09-17T00:30:28.824229319Z  ++ head -n1 /dev/fd/63
	2025-09-17T00:30:28.824816753Z  ++++ hostname
	2025-09-17T00:30:28.825608664Z  +++ timeout 5 getent ahostsv6 ha-671025-m04
	2025-09-17T00:30:28.828302473Z  + curr_ipv6=
	2025-09-17T00:30:28.828321267Z  + echo 'INFO: Detected IPv6 address: '
	2025-09-17T00:30:28.828336103Z  INFO: Detected IPv6 address: 
	2025-09-17T00:30:28.828338656Z  + '[' -f /kind/old-ipv6 ']'
	2025-09-17T00:30:28.828340471Z  + [[ -n '' ]]
	2025-09-17T00:30:28.828342629Z  + false
	2025-09-17T00:30:28.828827938Z  ++ uname -a
	2025-09-17T00:30:28.829651843Z  + echo 'entrypoint completed: Linux ha-671025-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux'
	2025-09-17T00:30:28.829667925Z  entrypoint completed: Linux ha-671025-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	2025-09-17T00:30:28.829672059Z  + exec /sbin/init
	2025-09-17T00:30:28.836352483Z  systemd 249.11-0ubuntu3.16 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
	2025-09-17T00:30:28.836371259Z  Detected virtualization docker.
	2025-09-17T00:30:28.836373827Z  Detected architecture x86-64.
	2025-09-17T00:30:28.836459947Z  
	2025-09-17T00:30:28.836477161Z  Welcome to Ubuntu 22.04.5 LTS!
	2025-09-17T00:30:28.836481441Z  
	2025-09-17T00:30:28.837009209Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:30:28.837020266Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:30:28.837023631Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:30:28.837026806Z  Exiting PID 1...
	
	-- /stdout --
	I0917 00:30:29.225894  601803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:30:29.287073  601803 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:30:29.277009512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:30:29.287194  601803 errors.go:98] postmortem docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 00:30:29.277009512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:30:29.287314  601803 network_create.go:284] running [docker network inspect ha-671025-m04] to gather additional debugging logs...
	I0917 00:30:29.287343  601803 cli_runner.go:164] Run: docker network inspect ha-671025-m04
	W0917 00:30:29.306535  601803 cli_runner.go:211] docker network inspect ha-671025-m04 returned with exit code 1
	I0917 00:30:29.306566  601803 network_create.go:287] error running [docker network inspect ha-671025-m04]: docker network inspect ha-671025-m04: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-671025-m04 not found
	I0917 00:30:29.306577  601803 network_create.go:289] output of [docker network inspect ha-671025-m04]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-671025-m04 not found
	
	** /stderr **
	I0917 00:30:29.306644  601803 client.go:171] duration metric: took 5.846149239s to LocalClient.Create
	I0917 00:30:31.307565  601803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:30:31.307664  601803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:30:31.326518  601803 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:30:31.326669  601803 retry.go:31] will retry after 141.117331ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:30:31.468091  601803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:30:31.487158  601803 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:30:31.487287  601803 retry.go:31] will retry after 435.149827ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:30:31.922571  601803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:30:31.942790  601803 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:30:31.942917  601803 retry.go:31] will retry after 706.634667ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:30:32.650699  601803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:30:32.672680  601803 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	W0917 00:30:32.672833  601803 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:30:32.672856  601803 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:30:32.672912  601803 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:30:32.672955  601803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:30:32.692848  601803 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:30:32.692954  601803 retry.go:31] will retry after 156.30533ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:30:32.850534  601803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:30:32.870191  601803 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:30:32.870321  601803 retry.go:31] will retry after 454.930332ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:30:33.326159  601803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:30:33.346042  601803 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:30:33.346183  601803 retry.go:31] will retry after 513.866428ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:30:33.860856  601803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:30:33.881227  601803 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	W0917 00:30:33.881349  601803 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:30:33.881364  601803 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:30:33.881376  601803 start.go:128] duration metric: took 10.423168322s to createHost
	I0917 00:30:33.881386  601803 start.go:83] releasing machines lock for "ha-671025-m04", held for 10.423335545s
	W0917 00:30:33.881492  601803 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-671025" may fix it: creating host: create: creating: prepare kic ssh: container name "ha-671025-m04" state Stopped: log: 2025-09-17T00:30:28.837009209Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:30:28.837020266Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:30:28.837023631Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:30:28.837026806Z  Exiting PID 1...: container exited unexpectedly
	* Failed to start docker container. Running "minikube delete -p ha-671025" may fix it: creating host: create: creating: prepare kic ssh: container name "ha-671025-m04" state Stopped: log: 2025-09-17T00:30:28.837009209Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:30:28.837020266Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:30:28.837023631Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:30:28.837026806Z  Exiting PID 1...: container exited unexpectedly
	I0917 00:30:33.884087  601803 out.go:203] 
	W0917 00:30:33.885624  601803 out.go:285] X Exiting due to GUEST_PROVISION_EXIT_UNEXPECTED: Failed to start host: creating host: create: creating: prepare kic ssh: container name "ha-671025-m04" state Stopped: log: 2025-09-17T00:30:28.837009209Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:30:28.837020266Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:30:28.837023631Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:30:28.837026806Z  Exiting PID 1...: container exited unexpectedly
	X Exiting due to GUEST_PROVISION_EXIT_UNEXPECTED: Failed to start host: creating host: create: creating: prepare kic ssh: container name "ha-671025-m04" state Stopped: log: 2025-09-17T00:30:28.837009209Z  Failed to create control group inotify object: Too many open files
	2025-09-17T00:30:28.837020266Z  Failed to allocate manager object: Too many open files
	2025-09-17T00:30:28.837023631Z  [!!!!!!] Failed to allocate manager object.
	2025-09-17T00:30:28.837026806Z  Exiting PID 1...: container exited unexpectedly
	I0917 00:30:33.887282  601803 out.go:203] 

                                                
                                                
** /stderr **
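Note on the failure above: the new worker's PID 1 dies because systemd cannot allocate a control-group inotify object; "Too many open files" here means the host has exhausted its per-user inotify instances, not ordinary file descriptors, so sshd never starts and every subsequent port lookup fails. A minimal mitigation sketch for the CI host, following the guidance kind publishes for this exact symptom (the limit values below are illustrative assumptions, not taken from this run):

	# Raise host-wide inotify limits so multiple kicbase containers sharing
	# one kernel can each allocate their own instances (values are assumptions).
	sudo sysctl fs.inotify.max_user_instances=8192
	sudo sysctl fs.inotify.max_user_watches=1048576
	# Persist across reboots; the drop-in path is the conventional location.
	printf 'fs.inotify.max_user_instances = 8192\nfs.inotify.max_user_watches = 1048576\n' | sudo tee /etc/sysctl.d/99-inotify.conf
	sudo sysctl --system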
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-671025 node add --alsologtostderr -v 5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-671025
helpers_test.go:243: (dbg) docker inspect ha-671025:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea",
	        "Created": "2025-09-17T00:28:07.60079298Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 591894,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:28:07.642349633Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/hostname",
	        "HostsPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/hosts",
	        "LogPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea-json.log",
	        "Name": "/ha-671025",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-671025:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-671025",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea",
	                "LowerDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-671025",
	                "Source": "/var/lib/docker/volumes/ha-671025/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-671025",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-671025",
	                "name.minikube.sigs.k8s.io": "ha-671025",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2947b2c900e461fedf4c1b14afccf677c0bbbd5856a737563908fb819f368e69",
	            "SandboxKey": "/var/run/docker/netns/2947b2c900e4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-671025": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:4e:63:a1:43:0d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0c35d0ccc41812bde7181e33c481a92e6c52d2d90efef6c84bca54a78763ef8",
	                    "EndpointID": "e04f7d855de79c251547e2cb959967e0ee3cd816f6030c7dc40e9731e31f953c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-671025",
	                        "843490787feb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
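For comparison with the failed lookups against the stopped ha-671025-m04 container, the same Go-template query that cli_runner retried above succeeds against this running primary node; a minimal reproduction (container name and expected port are read off the inspect dump above):

	# Extract the host port published for the container's SSH endpoint.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-671025
	# Prints 33148 for this run; on the stopped m04 container the Ports map
	# is empty, which is why minikube's retry loop above kept failing.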
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-671025 -n ha-671025
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-671025 logs -n 25: (1.271864552s)
helpers_test.go:260: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-836309 image ls --format table --alsologtostderr                                                               │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ image   │ functional-836309 image ls                                                                                                │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:21 UTC │ 17 Sep 25 00:21 UTC │
	│ delete  │ -p functional-836309                                                                                                      │ functional-836309 │ jenkins │ v1.37.0 │ 17 Sep 25 00:27 UTC │ 17 Sep 25 00:28 UTC │
	│ start   │ ha-671025 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio           │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:28 UTC │ 17 Sep 25 00:29 UTC │
	│ kubectl │ ha-671025 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                          │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:29 UTC │ 17 Sep 25 00:29 UTC │
	│ kubectl │ ha-671025 kubectl -- rollout status deployment/busybox                                                                    │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:29 UTC │ 17 Sep 25 00:30 UTC │
	│ kubectl │ ha-671025 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ kubectl │ ha-671025 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ kubectl │ ha-671025 kubectl -- exec busybox-7b57f96db7-dk9cf -- nslookup kubernetes.io                                              │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ kubectl │ ha-671025 kubectl -- exec busybox-7b57f96db7-wj4r5 -- nslookup kubernetes.io                                              │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ kubectl │ ha-671025 kubectl -- exec busybox-7b57f96db7-zw5tc -- nslookup kubernetes.io                                              │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ kubectl │ ha-671025 kubectl -- exec busybox-7b57f96db7-dk9cf -- nslookup kubernetes.default                                         │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ kubectl │ ha-671025 kubectl -- exec busybox-7b57f96db7-wj4r5 -- nslookup kubernetes.default                                         │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ kubectl │ ha-671025 kubectl -- exec busybox-7b57f96db7-zw5tc -- nslookup kubernetes.default                                         │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ kubectl │ ha-671025 kubectl -- exec busybox-7b57f96db7-dk9cf -- nslookup kubernetes.default.svc.cluster.local                       │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ kubectl │ ha-671025 kubectl -- exec busybox-7b57f96db7-wj4r5 -- nslookup kubernetes.default.svc.cluster.local                       │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ kubectl │ ha-671025 kubectl -- exec busybox-7b57f96db7-zw5tc -- nslookup kubernetes.default.svc.cluster.local                       │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ kubectl │ ha-671025 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ kubectl │ ha-671025 kubectl -- exec busybox-7b57f96db7-dk9cf -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ kubectl │ ha-671025 kubectl -- exec busybox-7b57f96db7-dk9cf -- sh -c ping -c 1 192.168.49.1                                        │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ kubectl │ ha-671025 kubectl -- exec busybox-7b57f96db7-wj4r5 -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ kubectl │ ha-671025 kubectl -- exec busybox-7b57f96db7-wj4r5 -- sh -c ping -c 1 192.168.49.1                                        │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ kubectl │ ha-671025 kubectl -- exec busybox-7b57f96db7-zw5tc -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ kubectl │ ha-671025 kubectl -- exec busybox-7b57f96db7-zw5tc -- sh -c ping -c 1 192.168.49.1                                        │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ node    │ ha-671025 node add --alsologtostderr -v 5                                                                                 │ ha-671025         │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
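	The Audit table above is rendered from minikube's on-disk audit trail, so the same command history can be pulled without rerunning "minikube logs". A hedged sketch (the audit file path is minikube's conventional location, and the JSON field names are assumptions inferred from the table columns, not verified against this build):
	
		# List every command recorded against the ha-671025 profile.
		jq -r 'select(.data.profile == "ha-671025") | [.data.command, .data.args] | @tsv' \
		  "${MINIKUBE_HOME:-$HOME/.minikube}/logs/audit.json"
	
	For this run the harness sets MINIKUBE_HOME to /home/jenkins/minikube-integration/21550-517646/.minikube, so the audit file would live under that tree.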
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:28:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:28:02.421105  591333 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:28:02.421342  591333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:28:02.421350  591333 out.go:374] Setting ErrFile to fd 2...
	I0917 00:28:02.421355  591333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:28:02.421569  591333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:28:02.422069  591333 out.go:368] Setting JSON to false
	I0917 00:28:02.422989  591333 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11425,"bootTime":1758057457,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:28:02.423098  591333 start.go:140] virtualization: kvm guest
	I0917 00:28:02.425200  591333 out.go:179] * [ha-671025] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:28:02.426666  591333 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:28:02.426650  591333 notify.go:220] Checking for updates...
	I0917 00:28:02.429221  591333 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:28:02.430609  591333 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:28:02.431832  591333 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 00:28:02.433241  591333 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:28:02.434707  591333 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:28:02.436048  591333 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:28:02.460585  591333 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:28:02.460765  591333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:28:02.517630  591333 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:28:02.506821705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:28:02.517750  591333 docker.go:318] overlay module found
	I0917 00:28:02.519568  591333 out.go:179] * Using the docker driver based on user configuration
	I0917 00:28:02.520915  591333 start.go:304] selected driver: docker
	I0917 00:28:02.520935  591333 start.go:918] validating driver "docker" against <nil>
	I0917 00:28:02.520951  591333 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:28:02.521682  591333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:28:02.578543  591333 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:28:02.56897484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:28:02.578724  591333 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0917 00:28:02.578937  591333 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:28:02.580907  591333 out.go:179] * Using Docker driver with root privileges
	I0917 00:28:02.582377  591333 cni.go:84] Creating CNI manager for ""
	I0917 00:28:02.582477  591333 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0917 00:28:02.582493  591333 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 00:28:02.582574  591333 start.go:348] cluster config:
	{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:28:02.583947  591333 out.go:179] * Starting "ha-671025" primary control-plane node in "ha-671025" cluster
	I0917 00:28:02.585129  591333 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:28:02.586454  591333 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:28:02.587786  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:28:02.587830  591333 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 00:28:02.587838  591333 cache.go:58] Caching tarball of preloaded images
	I0917 00:28:02.587843  591333 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:28:02.587944  591333 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:28:02.587958  591333 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:28:02.588350  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:28:02.588379  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json: {Name:mk091aa75e831ff22299b49a9817446c9f212399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:02.609265  591333 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:28:02.609287  591333 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:28:02.609305  591333 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:28:02.609329  591333 start.go:360] acquireMachinesLock for ha-671025: {Name:mk59b9e849284ed1f29625993b42430f4f0355ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:28:02.609454  591333 start.go:364] duration metric: took 102.584µs to acquireMachinesLock for "ha-671025"
	I0917 00:28:02.609482  591333 start.go:93] Provisioning new machine with config: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:28:02.609540  591333 start.go:125] createHost starting for "" (driver="docker")
	I0917 00:28:02.611610  591333 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:28:02.611847  591333 start.go:159] libmachine.API.Create for "ha-671025" (driver="docker")
	I0917 00:28:02.611880  591333 client.go:168] LocalClient.Create starting
	I0917 00:28:02.611969  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 00:28:02.612007  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:28:02.612019  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:28:02.612089  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 00:28:02.612110  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:28:02.612122  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:28:02.612504  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 00:28:02.630138  591333 cli_runner.go:211] docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 00:28:02.630214  591333 network_create.go:284] running [docker network inspect ha-671025] to gather additional debugging logs...
	I0917 00:28:02.630235  591333 cli_runner.go:164] Run: docker network inspect ha-671025
	W0917 00:28:02.647610  591333 cli_runner.go:211] docker network inspect ha-671025 returned with exit code 1
	I0917 00:28:02.647648  591333 network_create.go:287] error running [docker network inspect ha-671025]: docker network inspect ha-671025: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-671025 not found
	I0917 00:28:02.647665  591333 network_create.go:289] output of [docker network inspect ha-671025]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-671025 not found
	
	** /stderr **
	I0917 00:28:02.647783  591333 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:28:02.666874  591333 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014926f0}
	I0917 00:28:02.666937  591333 network_create.go:124] attempt to create docker network ha-671025 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0917 00:28:02.666993  591333 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-671025 ha-671025
	I0917 00:28:02.726570  591333 network_create.go:108] docker network ha-671025 192.168.49.0/24 created
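
Annotation: the network step above first runs docker network inspect (exit status 1 means the network does not exist yet), picks the first free private /24 (192.168.49.0/24 here), and creates a bridge network with a fixed gateway so the node can be given the static IP .2. A hedged Go sketch of the same CLI sequence; ensureNetwork is a hypothetical helper, not minikube's API:

    // Sketch: create the bridge network the way the cli_runner invocation
    // above does, by shelling out to the docker CLI.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func ensureNetwork(name, subnet, gateway string) error {
    	// A non-zero exit from `docker network inspect` means "not found".
    	if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
    		return nil // already exists
    	}
    	out, err := exec.Command("docker", "network", "create",
    		"--driver=bridge",
    		"--subnet="+subnet,
    		"--gateway="+gateway,
    		"-o", "--ip-masq", "-o", "--icc",
    		"-o", "com.docker.network.driver.mtu=1500",
    		"--label=created_by.minikube.sigs.k8s.io=true",
    		name).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("network create: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := ensureNetwork("ha-671025", "192.168.49.0/24", "192.168.49.1"); err != nil {
    		fmt.Println(err)
    	}
    }
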
	I0917 00:28:02.726603  591333 kic.go:121] calculated static IP "192.168.49.2" for the "ha-671025" container
	I0917 00:28:02.726684  591333 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:28:02.744335  591333 cli_runner.go:164] Run: docker volume create ha-671025 --label name.minikube.sigs.k8s.io=ha-671025 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:28:02.765618  591333 oci.go:103] Successfully created a docker volume ha-671025
	I0917 00:28:02.765710  591333 cli_runner.go:164] Run: docker run --rm --name ha-671025-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025 --entrypoint /usr/bin/test -v ha-671025:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:28:03.152134  591333 oci.go:107] Successfully prepared a docker volume ha-671025
	I0917 00:28:03.152201  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:28:03.152229  591333 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:28:03.152307  591333 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:28:07.519336  591333 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.366963199s)
	I0917 00:28:07.519373  591333 kic.go:203] duration metric: took 4.3671415s to extract preloaded images to volume ...
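
Annotation: the preloaded image tarball is unpacked into the ha-671025 named volume by a throwaway container running tar, since a named volume can only be written through a container that mounts it. A sketch under that assumption; extractPreload is a hypothetical helper and the tarball path is a placeholder:

    // Sketch: run tar -I lz4 inside a --rm container against the mounted
    // preload, mirroring the docker run invocation above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func extractPreload(tarball, volume, image string) (time.Duration, error) {
    	start := time.Now()
    	err := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").Run()
    	return time.Since(start), err
    }

    func main() {
    	d, err := extractPreload("/path/to/preload.tar.lz4", // placeholder path
    		"ha-671025", "gcr.io/k8s-minikube/kicbase:v0.0.48")
    	fmt.Println(d, err)
    }
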
	W0917 00:28:07.519497  591333 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:28:07.519557  591333 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:28:07.519606  591333 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:28:07.583258  591333 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-671025 --name ha-671025 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-671025 --network ha-671025 --ip 192.168.49.2 --volume ha-671025:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 00:28:07.861983  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Running}}
	I0917 00:28:07.881740  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:07.902486  591333 cli_runner.go:164] Run: docker exec ha-671025 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:28:07.957445  591333 oci.go:144] the created container "ha-671025" has a running status.
	I0917 00:28:07.957491  591333 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa...
	I0917 00:28:07.970221  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0917 00:28:07.970277  591333 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:28:07.996810  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:08.018618  591333 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 00:28:08.018648  591333 kic_runner.go:114] Args: [docker exec --privileged ha-671025 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 00:28:08.065859  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:08.088307  591333 machine.go:93] provisionDockerMachine start ...
	I0917 00:28:08.088464  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:08.112791  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:08.113142  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0917 00:28:08.113159  591333 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:28:08.114236  591333 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41092->127.0.0.1:33148: read: connection reset by peer
	I0917 00:28:11.250841  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025
	
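Annotation: the container's ports (22, 2376, 8443, and so on) are published to 127.0.0.1 on random host ports, so the SSH port (33148 here) has to be recovered from docker container inspect; the first handshake above fails with "connection reset by peer" because sshd inside the container is still starting, and the dial is simply retried until it succeeds. A sketch of the port lookup; sshPort is a hypothetical helper:

    // Sketch: recover the random host port Docker bound to the container's
    // 22/tcp, using the same inspect template as the log.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func sshPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect",
    		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		container).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	port, err := sshPort("ha-671025")
    	fmt.Println(port, err)
    }
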
	I0917 00:28:11.250869  591333 ubuntu.go:182] provisioning hostname "ha-671025"
	I0917 00:28:11.250946  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:11.270326  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:11.270573  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0917 00:28:11.270589  591333 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025 && echo "ha-671025" | sudo tee /etc/hostname
	I0917 00:28:11.422194  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025
	
	I0917 00:28:11.422282  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:11.441086  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:11.441373  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0917 00:28:11.441412  591333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:28:11.579534  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:28:11.579570  591333 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:28:11.579606  591333 ubuntu.go:190] setting up certificates
	I0917 00:28:11.579621  591333 provision.go:84] configureAuth start
	I0917 00:28:11.579696  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:28:11.598338  591333 provision.go:143] copyHostCerts
	I0917 00:28:11.598381  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:28:11.598438  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:28:11.598450  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:28:11.598528  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:28:11.598637  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:28:11.598660  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:28:11.598668  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:28:11.598709  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:28:11.598793  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:28:11.598818  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:28:11.598827  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:28:11.598863  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:28:11.598936  591333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025 san=[127.0.0.1 192.168.49.2 ha-671025 localhost minikube]
	I0917 00:28:11.692056  591333 provision.go:177] copyRemoteCerts
	I0917 00:28:11.692126  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:28:11.692177  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:11.710836  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:11.809661  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:28:11.809738  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:28:11.838472  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:28:11.838547  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0917 00:28:11.864972  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:28:11.865064  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 00:28:11.892502  591333 provision.go:87] duration metric: took 312.863604ms to configureAuth
	I0917 00:28:11.892539  591333 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:28:11.892749  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:28:11.892876  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:11.911894  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:11.912108  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0917 00:28:11.912123  591333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:28:12.156893  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:28:12.156918  591333 machine.go:96] duration metric: took 4.068577091s to provisionDockerMachine
	I0917 00:28:12.156929  591333 client.go:171] duration metric: took 9.545042483s to LocalClient.Create
	I0917 00:28:12.156950  591333 start.go:167] duration metric: took 9.54510971s to libmachine.API.Create "ha-671025"
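
Annotation: provisioning finishes by writing a sysconfig drop-in that whitelists the whole service CIDR (10.96.0.0/12) as an insecure registry, so in-cluster registries reached via a ClusterIP (such as the registry addon) work over plain HTTP. A sketch that renders the same drop-in; crioSysconfig is a hypothetical helper:

    // Sketch: render the /etc/sysconfig/crio.minikube content written above.
    package main

    import "fmt"

    func crioSysconfig(serviceCIDR string) string {
    	return fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
    }

    func main() { fmt.Print(crioSysconfig("10.96.0.0/12")) }
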
	I0917 00:28:12.156957  591333 start.go:293] postStartSetup for "ha-671025" (driver="docker")
	I0917 00:28:12.156965  591333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:28:12.157043  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:28:12.157079  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:12.175648  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:12.275414  591333 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:28:12.279194  591333 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:28:12.279224  591333 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:28:12.279231  591333 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:28:12.279238  591333 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:28:12.279255  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:28:12.279317  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:28:12.279416  591333 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:28:12.279430  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:28:12.279530  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:28:12.288873  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:28:12.317418  591333 start.go:296] duration metric: took 160.444141ms for postStartSetup
	I0917 00:28:12.317811  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:28:12.336261  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:28:12.336565  591333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:28:12.336607  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:12.354705  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:12.446983  591333 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:28:12.451593  591333 start.go:128] duration metric: took 9.842036225s to createHost
	I0917 00:28:12.451634  591333 start.go:83] releasing machines lock for "ha-671025", held for 9.842165682s
	I0917 00:28:12.451714  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:28:12.469798  591333 ssh_runner.go:195] Run: cat /version.json
	I0917 00:28:12.469852  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:12.469869  591333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:28:12.469931  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:12.489508  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:12.489501  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:12.581676  591333 ssh_runner.go:195] Run: systemctl --version
	I0917 00:28:12.654927  591333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:28:12.796661  591333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:28:12.802016  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:28:12.827191  591333 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:28:12.827278  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:28:12.858197  591333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
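
Annotation: preinstalled loopback, bridge, and podman CNI configs are renamed with a .mk_disabled suffix so the runtime ignores them (the kindnet CNI is selected further down for the multinode case). A Go rendering of the same find/mv sweep; disableCNIConfigs is a hypothetical helper:

    // Sketch: rename conflicting CNI configs under /etc/cni/net.d, matching
    // the find/mv pipeline in the log.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func disableCNIConfigs(patterns ...string) error {
    	for _, p := range patterns {
    		matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", p))
    		if err != nil {
    			return err
    		}
    		for _, m := range matches {
    			if filepath.Ext(m) == ".mk_disabled" {
    				continue // already disabled
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				return err
    			}
    			fmt.Println("disabled", m)
    		}
    	}
    	return nil
    }

    func main() { _ = disableCNIConfigs("*loopback.conf*", "*bridge*", "*podman*") }
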
	I0917 00:28:12.858222  591333 start.go:495] detecting cgroup driver to use...
	I0917 00:28:12.858256  591333 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:28:12.858306  591333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:28:12.874462  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:28:12.887158  591333 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:28:12.887226  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:28:12.902417  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:28:12.917174  591333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:28:12.986628  591333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:28:13.060583  591333 docker.go:234] disabling docker service ...
	I0917 00:28:13.060656  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:28:13.081466  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:28:13.094012  591333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:28:13.164943  591333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:28:13.315404  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:28:13.328708  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:28:13.347694  591333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:28:13.347757  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.361221  591333 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:28:13.361294  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.371972  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.382985  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.394505  591333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:28:13.405096  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.416205  591333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.434282  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.445654  591333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:28:13.454948  591333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:28:13.464245  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:28:13.526087  591333 ssh_runner.go:195] Run: sudo systemctl restart crio
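
Annotation: CRI-O is adapted in place with sed before the restart above: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is forced to systemd with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected as a default sysctl. A sketch of the first of those edits done as a Go string rewrite instead of sed:

    // Sketch: pin the pause image by rewriting the pause_image line, the same
    // substitution the sed command above performs on 02-crio.conf.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    func pinPauseImage(conf, image string) string {
    	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	return re.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", image))
    }

    func main() {
    	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
    	fmt.Print(pinPauseImage(conf, "registry.k8s.io/pause:3.10.1"))
    }
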
	I0917 00:28:13.629597  591333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:28:13.629677  591333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:28:13.634535  591333 start.go:563] Will wait 60s for crictl version
	I0917 00:28:13.634599  591333 ssh_runner.go:195] Run: which crictl
	I0917 00:28:13.639122  591333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:28:13.675949  591333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:28:13.676043  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:28:13.713216  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:28:13.752386  591333 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:28:13.753755  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:28:13.771156  591333 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:28:13.775524  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:28:13.788890  591333 kubeadm.go:875] updating cluster {Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:28:13.789115  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:28:13.789184  591333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:28:13.863780  591333 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:28:13.863811  591333 crio.go:433] Images already preloaded, skipping extraction
	I0917 00:28:13.863873  591333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:28:13.900999  591333 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:28:13.901021  591333 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:28:13.901028  591333 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0917 00:28:13.901149  591333 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:28:13.901218  591333 ssh_runner.go:195] Run: crio config
	I0917 00:28:13.947330  591333 cni.go:84] Creating CNI manager for ""
	I0917 00:28:13.947354  591333 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0917 00:28:13.947367  591333 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:28:13.947398  591333 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-671025 NodeName:ha-671025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:28:13.947540  591333 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-671025"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
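Annotation: the kubeadm config above is presumably rendered from a Go template. The notable choices are controlPlaneEndpoint pointing at control-plane.minikube.internal:8443 (pinned to the kube-vip VIP in /etc/hosts below), disk-pressure eviction switched off entirely, and failSwapOn: false, all of which suit a disposable container "node". A trimmed, assumed-shape template for just the eviction knobs:

    // Sketch under the templating assumption: render the disk-eviction
    // settings that the generated KubeletConfiguration above zeroes out.
    package main

    import (
    	"os"
    	"text/template"
    )

    const kubeletSnippet = `# disable disk resource management by default
    imageGCHighThresholdPercent: {{.ImageGCHigh}}
    evictionHard:
      nodefs.available: "{{.NodefsAvail}}"
      nodefs.inodesFree: "{{.NodefsInodes}}"
      imagefs.available: "{{.ImagefsAvail}}"
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(kubeletSnippet))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"ImageGCHigh":  "100",
    		"NodefsAvail":  "0%",
    		"NodefsInodes": "0%",
    		"ImagefsAvail": "0%",
    	})
    }
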
	I0917 00:28:13.947571  591333 kube-vip.go:115] generating kube-vip config ...
	I0917 00:28:13.947618  591333 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:28:13.962176  591333 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:28:13.962288  591333 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
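
Annotation: kube-vip runs as a static pod on every control-plane node, holds the plndr-cp-lock lease for leader election, and ARP-advertises the VIP 192.168.49.254 on eth0. The lsmod probe above came back empty, so IPVS-based control-plane load-balancing is skipped and only VIP failover remains. The probe, sketched; ipvsAvailable is a hypothetical helper:

    // Sketch: the same ip_vs check the generator runs before deciding
    // whether to enable kube-vip's load-balancing mode.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func ipvsAvailable() bool {
    	return exec.Command("sh", "-c", "lsmod | grep ip_vs").Run() == nil
    }

    func main() {
    	fmt.Println("enable control-plane load-balancing:", ipvsAvailable())
    }
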
	I0917 00:28:13.962356  591333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:28:13.972318  591333 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:28:13.972409  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:28:13.982775  591333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0917 00:28:14.003185  591333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:28:14.025114  591333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I0917 00:28:14.043893  591333 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0917 00:28:14.063914  591333 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:28:14.067851  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
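
Annotation: control-plane.minikube.internal is pinned to the VIP by rewriting /etc/hosts with a drop-stale-then-append pipeline (grep -v into a temp file, then an echo of the new entry). The same logic as a small Go function; pinHost is a hypothetical helper:

    // Sketch: drop any stale line for the name, then append the pinned
    // address, mirroring the grep -v / echo pipeline above.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func pinHost(hosts, ip, name string) string {
    	var out []string
    	for _, line := range strings.Split(hosts, "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // stale entry
    		}
    		if line != "" {
    			out = append(out, line)
    		}
    	}
    	out = append(out, ip+"\t"+name)
    	return strings.Join(out, "\n") + "\n"
    }

    func main() {
    	fmt.Print(pinHost("127.0.0.1\tlocalhost\n", "192.168.49.254",
    		"control-plane.minikube.internal"))
    }
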
	I0917 00:28:14.079495  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:28:14.146352  591333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:28:14.170001  591333 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.2
	I0917 00:28:14.170029  591333 certs.go:194] generating shared ca certs ...
	I0917 00:28:14.170049  591333 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.170209  591333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:28:14.170248  591333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:28:14.170258  591333 certs.go:256] generating profile certs ...
	I0917 00:28:14.170312  591333 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:28:14.170334  591333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt with IP's: []
	I0917 00:28:14.258881  591333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt ...
	I0917 00:28:14.258912  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt: {Name:mkf356a325e81df463620a9a59f1e19636a8bbe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.259129  591333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key ...
	I0917 00:28:14.259150  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key: {Name:mka2338ec2b6b28954ea0ef14eeb3d06111be43d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.259268  591333 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.42f16444
	I0917 00:28:14.259285  591333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.42f16444 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0917 00:28:14.420479  591333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.42f16444 ...
	I0917 00:28:14.420509  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.42f16444: {Name:mkcf98c32344d33f146459467ae0b529b09930e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.420720  591333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.42f16444 ...
	I0917 00:28:14.420744  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.42f16444: {Name:mk2a9dddb825d571b4beb46eeddb7582f0b5a38a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.420868  591333 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.42f16444 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt
	I0917 00:28:14.420963  591333 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.42f16444 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key
	I0917 00:28:14.421066  591333 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:28:14.421086  591333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt with IP's: []
	I0917 00:28:14.667928  591333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt ...
	I0917 00:28:14.667965  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt: {Name:mk8fc3d9cf0ef31fe8163e3202ec93ff4212c0d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.668186  591333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key ...
	I0917 00:28:14.668205  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key: {Name:mk4aadb37423b11008cecd193572dcb26f4156f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
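
Annotation: the profile certs are issued from the cached minikubeCA; the apiserver cert's SAN IPs above cover 10.96.0.1 (the first address of the service CIDR), the node IP 192.168.49.2, and the HA VIP 192.168.49.254. A simplified crypto/x509 sketch of issuing such a serving cert; real code also adds DNS SANs and writes the files under .minikube, and the CN and key size here are assumptions:

    // Simplified sketch: issue an apiserver serving cert from a CA, with the
    // SAN IPs from the log. Not minikube's actual code.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func signServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		IPAddresses: []net.IP{ // SANs from the log line above
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
    			net.ParseIP("192.168.49.254"),
    		},
    		NotBefore:   time.Now(),
    		NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    }

    func main() {
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(0), Subject: pkix.Name{CommonName: "minikubeCA"},
    		NotBefore: time.Now(), NotAfter: time.Now().Add(26280 * time.Hour),
    		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)
    	der, err := signServingCert(caCert, caKey)
    	fmt.Println(len(der), err)
    }
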
	I0917 00:28:14.668320  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:28:14.668341  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:28:14.668351  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:28:14.668364  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:28:14.668375  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:28:14.668386  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:28:14.668408  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:28:14.668420  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:28:14.668487  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:28:14.668524  591333 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:28:14.668533  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:28:14.668554  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:28:14.668631  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:28:14.668666  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:28:14.668710  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:28:14.668747  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:28:14.668764  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:14.668780  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:28:14.669300  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:28:14.695942  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:28:14.721853  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:28:14.746954  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:28:14.773182  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 00:28:14.798782  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:28:14.823720  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:28:14.847907  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:28:14.872531  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:28:14.900554  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:28:14.925365  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:28:14.953903  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:28:14.973565  591333 ssh_runner.go:195] Run: openssl version
	I0917 00:28:14.979257  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:28:14.989070  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:28:14.992786  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:28:14.992847  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:28:14.999827  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:28:15.009762  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:28:15.019180  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:28:15.022635  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:28:15.022690  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:28:15.029591  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:28:15.039107  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:28:15.048628  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:15.052181  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:15.052230  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:15.058893  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
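
Annotation: each CA is installed into the system trust store under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem), which is exactly what the openssl x509 -hash calls above compute before the ln -fs. Sketched in Go; installCert is a hypothetical helper:

    // Sketch: compute the OpenSSL subject hash and install the cert as
    // /etc/ssl/certs/<hash>.0, as the ln -fs commands above do.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func installCert(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	os.Remove(link) // -f behaviour: replace any existing link
    	return os.Symlink(pem, link)
    }

    func main() {
    	fmt.Println(installCert("/usr/share/ca-certificates/minikubeCA.pem"))
    }
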
	I0917 00:28:15.069771  591333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:28:15.073670  591333 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 00:28:15.073738  591333 kubeadm.go:392] StartCluster: {Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:28:15.073818  591333 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 00:28:15.073904  591333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:28:15.110504  591333 cri.go:89] found id: ""
	I0917 00:28:15.110589  591333 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:28:15.119903  591333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 00:28:15.129328  591333 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 00:28:15.129384  591333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 00:28:15.138492  591333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 00:28:15.138510  591333 kubeadm.go:157] found existing configuration files:
	
	I0917 00:28:15.138563  591333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 00:28:15.147903  591333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 00:28:15.147969  591333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 00:28:15.157062  591333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 00:28:15.166583  591333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 00:28:15.166646  591333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 00:28:15.176378  591333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 00:28:15.185922  591333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 00:28:15.185988  591333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 00:28:15.195234  591333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 00:28:15.204565  591333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 00:28:15.204624  591333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
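
Annotation: on a first start none of the kubeconfigs exist, so every grep for the control-plane endpoint above exits with status 2 and the subsequent rm -f is a no-op; on restarts this same sweep removes configs that point at a stale endpoint. The rule, sketched; sweep is a hypothetical helper:

    // Sketch: keep a kubeconfig only if it still references the expected
    // control-plane endpoint, otherwise remove it.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func sweep(endpoint string, files []string) {
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			os.Remove(f) // no-op on first start, when the file is absent
    			fmt.Println("removed stale", f)
    		}
    	}
    }

    func main() {
    	sweep("https://control-plane.minikube.internal:8443",
    		[]string{"/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf",
    			"/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf"})
    }
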
	I0917 00:28:15.213513  591333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 00:28:15.268809  591333 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0917 00:28:15.322273  591333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 00:28:25.344526  591333 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0917 00:28:25.344586  591333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 00:28:25.344654  591333 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0917 00:28:25.344699  591333 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0917 00:28:25.344758  591333 kubeadm.go:310] OS: Linux
	I0917 00:28:25.344813  591333 kubeadm.go:310] CGROUPS_CPU: enabled
	I0917 00:28:25.344864  591333 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0917 00:28:25.344910  591333 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0917 00:28:25.344953  591333 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0917 00:28:25.345000  591333 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0917 00:28:25.345048  591333 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0917 00:28:25.345119  591333 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0917 00:28:25.345192  591333 kubeadm.go:310] CGROUPS_IO: enabled
	I0917 00:28:25.345263  591333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 00:28:25.345346  591333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 00:28:25.345452  591333 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 00:28:25.345508  591333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 00:28:25.347069  591333 out.go:252]   - Generating certificates and keys ...
	I0917 00:28:25.347143  591333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 00:28:25.347233  591333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 00:28:25.347311  591333 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 00:28:25.347369  591333 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 00:28:25.347468  591333 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 00:28:25.347518  591333 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 00:28:25.347562  591333 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 00:28:25.347663  591333 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-671025 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 00:28:25.347707  591333 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 00:28:25.347846  591333 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-671025 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 00:28:25.348037  591333 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 00:28:25.348142  591333 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 00:28:25.348209  591333 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 00:28:25.348278  591333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 00:28:25.348323  591333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 00:28:25.348380  591333 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 00:28:25.348445  591333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 00:28:25.348531  591333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 00:28:25.348623  591333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 00:28:25.348735  591333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 00:28:25.348831  591333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 00:28:25.351075  591333 out.go:252]   - Booting up control plane ...
	I0917 00:28:25.351182  591333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 00:28:25.351283  591333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 00:28:25.351361  591333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 00:28:25.351548  591333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 00:28:25.351700  591333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0917 00:28:25.351849  591333 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0917 00:28:25.351934  591333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 00:28:25.351970  591333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 00:28:25.352082  591333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 00:28:25.352189  591333 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 00:28:25.352283  591333 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00103693s
	I0917 00:28:25.352386  591333 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0917 00:28:25.352498  591333 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0917 00:28:25.352576  591333 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0917 00:28:25.352659  591333 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0917 00:28:25.352745  591333 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.008701955s
	I0917 00:28:25.352807  591333 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.208053254s
	I0917 00:28:25.352891  591333 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 3.501882009s
	I0917 00:28:25.352984  591333 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 00:28:25.353099  591333 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 00:28:25.353159  591333 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 00:28:25.353326  591333 kubeadm.go:310] [mark-control-plane] Marking the node ha-671025 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 00:28:25.353381  591333 kubeadm.go:310] [bootstrap-token] Using token: 945t58.lx3tewj0v31y7u2l
	I0917 00:28:25.354623  591333 out.go:252]   - Configuring RBAC rules ...
	I0917 00:28:25.354715  591333 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 00:28:25.354845  591333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 00:28:25.355014  591333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 00:28:25.355187  591333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 00:28:25.355345  591333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 00:28:25.355454  591333 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 00:28:25.355574  591333 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 00:28:25.355621  591333 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 00:28:25.355662  591333 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 00:28:25.355668  591333 kubeadm.go:310] 
	I0917 00:28:25.355718  591333 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 00:28:25.355727  591333 kubeadm.go:310] 
	I0917 00:28:25.355804  591333 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 00:28:25.355810  591333 kubeadm.go:310] 
	I0917 00:28:25.355831  591333 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 00:28:25.355911  591333 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 00:28:25.355972  591333 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 00:28:25.355979  591333 kubeadm.go:310] 
	I0917 00:28:25.356051  591333 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 00:28:25.356065  591333 kubeadm.go:310] 
	I0917 00:28:25.356135  591333 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 00:28:25.356143  591333 kubeadm.go:310] 
	I0917 00:28:25.356220  591333 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 00:28:25.356331  591333 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 00:28:25.356455  591333 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 00:28:25.356470  591333 kubeadm.go:310] 
	I0917 00:28:25.356549  591333 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 00:28:25.356635  591333 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 00:28:25.356643  591333 kubeadm.go:310] 
	I0917 00:28:25.356717  591333 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 945t58.lx3tewj0v31y7u2l \
	I0917 00:28:25.356829  591333 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 \
	I0917 00:28:25.356858  591333 kubeadm.go:310] 	--control-plane 
	I0917 00:28:25.356865  591333 kubeadm.go:310] 
	I0917 00:28:25.356941  591333 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 00:28:25.356947  591333 kubeadm.go:310] 
	I0917 00:28:25.357048  591333 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 945t58.lx3tewj0v31y7u2l \
	I0917 00:28:25.357188  591333 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 
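
The join commands above pin trust in the cluster with --discovery-token-ca-cert-hash. If that hash ever has to be recomputed on the control plane, the usual kubeadm recipe looks like the sketch below; the one assumption is the certificate location, which for minikube is the certificateDir logged above (/var/lib/minikube/certs) rather than kubeadm's default /etc/kubernetes/pki.

    # recompute the discovery hash from the cluster CA (sketch; cert path per
    # the certificateDir logged above, not kubeadm's default /etc/kubernetes/pki)
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | sha256sum | cut -d' ' -f1
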
	I0917 00:28:25.357207  591333 cni.go:84] Creating CNI manager for ""
	I0917 00:28:25.357216  591333 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0917 00:28:25.358901  591333 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0917 00:28:25.360097  591333 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0917 00:28:25.364931  591333 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0917 00:28:25.364953  591333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0917 00:28:25.387094  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
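
minikube detected a multinode profile and applied its kindnet manifest with the pinned kubectl. A quick readiness check would look like the sketch below, assuming kindnet ships as a DaemonSet named kindnet in kube-system (as in minikube's bundled manifest) and that the kubeconfig context carries the profile name; both names are assumptions here.

    # wait for the CNI daemonset rollout (sketch; context and daemonset names assumed)
    kubectl --context ha-671025 -n kube-system rollout status daemonset/kindnet --timeout=60s
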
	I0917 00:28:25.613643  591333 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 00:28:25.613728  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:25.613746  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-671025 minikube.k8s.io/updated_at=2025_09_17T00_28_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-671025 minikube.k8s.io/primary=true
	I0917 00:28:25.624073  591333 ops.go:34] apiserver oom_adj: -16
	I0917 00:28:25.696361  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:26.196672  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:26.696850  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:27.197218  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:27.696539  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:28.196491  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:28.696543  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:29.196814  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:29.696595  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:30.196581  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:30.273337  591333 kubeadm.go:1105] duration metric: took 4.659672583s to wait for elevateKubeSystemPrivileges
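
The burst of `kubectl get sa default` calls above is a readiness poll: the minikube-rbac ClusterRoleBinding was created at 00:28:25.613728, and minikube then waits for the ServiceAccount controller to materialize the default service account before proceeding. The loop is roughly equivalent to this sketch:

    # poll until the default service account exists (sketch of the wait loop above)
    until sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
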
	I0917 00:28:30.273483  591333 kubeadm.go:394] duration metric: took 15.19974193s to StartCluster
	I0917 00:28:30.273523  591333 settings.go:142] acquiring lock: {Name:mk3b4e5824fb8718eece00dc70a9d05f0af2a028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:30.273607  591333 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:28:30.274607  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:30.274913  591333 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:28:30.274945  591333 start.go:241] waiting for startup goroutines ...
	I0917 00:28:30.274948  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 00:28:30.274965  591333 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:28:30.275045  591333 addons.go:69] Setting storage-provisioner=true in profile "ha-671025"
	I0917 00:28:30.275085  591333 addons.go:238] Setting addon storage-provisioner=true in "ha-671025"
	I0917 00:28:30.275129  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:28:30.275048  591333 addons.go:69] Setting default-storageclass=true in profile "ha-671025"
	I0917 00:28:30.275164  591333 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-671025"
	I0917 00:28:30.275205  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:28:30.275523  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:30.275665  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:30.298018  591333 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:28:30.298668  591333 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:28:30.298695  591333 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:28:30.298702  591333 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:28:30.298708  591333 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:28:30.298714  591333 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:28:30.298802  591333 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:28:30.299193  591333 addons.go:238] Setting addon default-storageclass=true in "ha-671025"
	I0917 00:28:30.299247  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:28:30.299354  591333 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 00:28:30.299784  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:30.300585  591333 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:28:30.300605  591333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 00:28:30.300669  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:30.319752  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:30.321070  591333 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 00:28:30.321101  591333 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 00:28:30.321165  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:30.347717  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:30.362789  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 00:28:30.443108  591333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:28:30.467358  591333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 00:28:30.541692  591333 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
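
The sed pipeline at 00:28:30.362789 rewrites the coredns ConfigMap so in-cluster lookups of host.minikube.internal resolve to the Docker network gateway (192.168.49.1). The patched Corefile can be printed as in this sketch (assuming the kubeconfig context matches the profile name):

    # print the patched Corefile; expect a hosts block mapping
    # 192.168.49.1 -> host.minikube.internal with fallthrough
    kubectl --context ha-671025 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
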
	I0917 00:28:30.788755  591333 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0917 00:28:30.790283  591333 addons.go:514] duration metric: took 515.302961ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0917 00:28:30.790337  591333 start.go:246] waiting for cluster config update ...
	I0917 00:28:30.790355  591333 start.go:255] writing updated cluster config ...
	I0917 00:28:30.792167  591333 out.go:203] 
	I0917 00:28:30.794434  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:28:30.794553  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:28:30.797029  591333 out.go:179] * Starting "ha-671025-m02" control-plane node in "ha-671025" cluster
	I0917 00:28:30.798740  591333 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:28:30.800340  591333 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:28:30.801532  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:28:30.801576  591333 cache.go:58] Caching tarball of preloaded images
	I0917 00:28:30.801656  591333 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:28:30.801701  591333 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:28:30.801721  591333 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:28:30.801837  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:28:30.826923  591333 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:28:30.826950  591333 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:28:30.826970  591333 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:28:30.827006  591333 start.go:360] acquireMachinesLock for ha-671025-m02: {Name:mk1465985964f60af81adbf10dbe0a21c7eb20d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:28:30.827168  591333 start.go:364] duration metric: took 135.604µs to acquireMachinesLock for "ha-671025-m02"
	I0917 00:28:30.827198  591333 start.go:93] Provisioning new machine with config: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:28:30.827285  591333 start.go:125] createHost starting for "m02" (driver="docker")
	I0917 00:28:30.829869  591333 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:28:30.830019  591333 start.go:159] libmachine.API.Create for "ha-671025" (driver="docker")
	I0917 00:28:30.830056  591333 client.go:168] LocalClient.Create starting
	I0917 00:28:30.830117  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 00:28:30.830162  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:28:30.830180  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:28:30.830241  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 00:28:30.830266  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:28:30.830274  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:28:30.830527  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:28:30.850687  591333 network_create.go:77] Found existing network {name:ha-671025 subnet:0xc0018d10b0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0917 00:28:30.850727  591333 kic.go:121] calculated static IP "192.168.49.3" for the "ha-671025-m02" container
	I0917 00:28:30.850801  591333 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:28:30.869737  591333 cli_runner.go:164] Run: docker volume create ha-671025-m02 --label name.minikube.sigs.k8s.io=ha-671025-m02 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:28:30.890468  591333 oci.go:103] Successfully created a docker volume ha-671025-m02
	I0917 00:28:30.890596  591333 cli_runner.go:164] Run: docker run --rm --name ha-671025-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025-m02 --entrypoint /usr/bin/test -v ha-671025-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:28:31.278702  591333 oci.go:107] Successfully prepared a docker volume ha-671025-m02
	I0917 00:28:31.278750  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:28:31.278777  591333 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:28:31.278882  591333 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:28:35.682273  591333 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.403350864s)
	I0917 00:28:35.682311  591333 kic.go:203] duration metric: took 4.403531688s to extract preloaded images to volume ...
	W0917 00:28:35.682411  591333 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:28:35.682448  591333 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:28:35.682488  591333 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:28:35.742164  591333 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-671025-m02 --name ha-671025-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-671025-m02 --network ha-671025 --ip 192.168.49.3 --volume ha-671025-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 00:28:36.033045  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Running}}
	I0917 00:28:36.053351  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:28:36.072949  591333 cli_runner.go:164] Run: docker exec ha-671025-m02 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:28:36.126815  591333 oci.go:144] the created container "ha-671025-m02" has a running status.
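
Each additional node is a privileged kicbase container on the cluster network with a static IP, and its SSH and API ports are published to ephemeral host ports (the --publish=127.0.0.1:: form above). The mapping minikube later resolves via `docker container inspect` can also be read directly (sketch):

    # show which host port maps to the node's SSH port
    docker port ha-671025-m02 22/tcp    # e.g. 127.0.0.1:33153, as seen later in this log
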
	I0917 00:28:36.126844  591333 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa...
	I0917 00:28:36.161749  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0917 00:28:36.161792  591333 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:28:36.189714  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:28:36.212082  591333 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 00:28:36.212109  591333 kic_runner.go:114] Args: [docker exec --privileged ha-671025-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 00:28:36.260306  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:28:36.282829  591333 machine.go:93] provisionDockerMachine start ...
	I0917 00:28:36.282954  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:36.312073  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:36.312435  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I0917 00:28:36.312461  591333 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:28:36.313226  591333 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47290->127.0.0.1:33153: read: connection reset by peer
	I0917 00:28:39.452508  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m02
	
	I0917 00:28:39.452557  591333 ubuntu.go:182] provisioning hostname "ha-671025-m02"
	I0917 00:28:39.452652  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:39.472236  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:39.472561  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I0917 00:28:39.472581  591333 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m02 && echo "ha-671025-m02" | sudo tee /etc/hostname
	I0917 00:28:39.626427  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m02
	
	I0917 00:28:39.626517  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:39.645919  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:39.646146  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I0917 00:28:39.646163  591333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:28:39.786717  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
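
The hosts-file script above is idempotent: grep -xq matches whole lines only, so an existing 127.0.1.1 entry is rewritten in place and a new one is appended otherwise. A spot check from the host could look like this sketch (flag names per minikube v1.37's multinode ssh support, an assumption here):

    # verify the node resolves its own hostname (sketch; -n selects the node)
    minikube -p ha-671025 ssh -n m02 -- grep ha-671025-m02 /etc/hosts
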
	I0917 00:28:39.786756  591333 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:28:39.786781  591333 ubuntu.go:190] setting up certificates
	I0917 00:28:39.786798  591333 provision.go:84] configureAuth start
	I0917 00:28:39.786974  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:28:39.807773  591333 provision.go:143] copyHostCerts
	I0917 00:28:39.807815  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:28:39.807847  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:28:39.807858  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:28:39.807932  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:28:39.808029  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:28:39.808050  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:28:39.808054  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:28:39.808081  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:28:39.808149  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:28:39.808167  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:28:39.808172  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:28:39.808200  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:28:39.808255  591333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m02 san=[127.0.0.1 192.168.49.3 ha-671025-m02 localhost minikube]
	I0917 00:28:39.918454  591333 provision.go:177] copyRemoteCerts
	I0917 00:28:39.918537  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:28:39.918589  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:39.937978  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:28:40.039160  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:28:40.039233  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:28:40.069797  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:28:40.069887  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:28:40.098311  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:28:40.098408  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 00:28:40.127419  591333 provision.go:87] duration metric: took 340.575644ms to configureAuth
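
configureAuth generated a Docker machine server certificate whose SANs (logged at 00:28:39.808255) cover the node's loopback and static IPs plus its hostnames. With a reasonably recent openssl the SANs can be inspected directly (sketch):

    # list the SANs baked into the machine server cert (requires openssl 1.1.1+)
    openssl x509 -noout -ext subjectAltName \
      -in /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem
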
	I0917 00:28:40.127458  591333 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:28:40.127656  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:28:40.127785  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:40.147026  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:40.147308  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I0917 00:28:40.147331  591333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:28:40.409609  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:28:40.409640  591333 machine.go:96] duration metric: took 4.1267811s to provisionDockerMachine
	I0917 00:28:40.409651  591333 client.go:171] duration metric: took 9.579589798s to LocalClient.Create
	I0917 00:28:40.409674  591333 start.go:167] duration metric: took 9.579655281s to libmachine.API.Create "ha-671025"
	I0917 00:28:40.409684  591333 start.go:293] postStartSetup for "ha-671025-m02" (driver="docker")
	I0917 00:28:40.409696  591333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:28:40.409769  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:28:40.409816  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:40.431881  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:28:40.535836  591333 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:28:40.540091  591333 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:28:40.540127  591333 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:28:40.540134  591333 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:28:40.540141  591333 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:28:40.540153  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:28:40.540203  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:28:40.540294  591333 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:28:40.540310  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:28:40.540600  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:28:40.551220  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:28:40.582236  591333 start.go:296] duration metric: took 172.533526ms for postStartSetup
	I0917 00:28:40.582728  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:28:40.602550  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:28:40.602895  591333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:28:40.602973  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:40.625331  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:28:40.720887  591333 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:28:40.725796  591333 start.go:128] duration metric: took 9.898487722s to createHost
	I0917 00:28:40.725827  591333 start.go:83] releasing machines lock for "ha-671025-m02", held for 9.89864483s
	I0917 00:28:40.725898  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:28:40.749075  591333 out.go:179] * Found network options:
	I0917 00:28:40.750936  591333 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:28:40.752439  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:28:40.752503  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:28:40.752575  591333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:28:40.752624  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:40.752703  591333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:28:40.752776  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:40.774163  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:28:40.775400  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:28:41.009369  591333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:28:41.014989  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:28:41.040280  591333 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:28:41.040373  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:28:41.077837  591333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 00:28:41.077864  591333 start.go:495] detecting cgroup driver to use...
	I0917 00:28:41.077899  591333 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:28:41.077939  591333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:28:41.098363  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:28:41.112692  591333 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:28:41.112768  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:28:41.128481  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:28:41.145954  591333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:28:41.216259  591333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:28:41.293618  591333 docker.go:234] disabling docker service ...
	I0917 00:28:41.293683  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:28:41.314463  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:28:41.327805  591333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:28:41.402097  591333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:28:41.515728  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:28:41.528751  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:28:41.548638  591333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:28:41.548717  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.563770  591333 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:28:41.563842  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.575236  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.586559  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.599824  591333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:28:41.612614  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.624744  591333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.645749  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
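
Taken together, the sed edits above pin the pause image, switch CRI-O to the systemd cgroup manager, move conmon into the pod cgroup, and open unprivileged low ports. One way to confirm the resulting /etc/crio/crio.conf.d/02-crio.conf on the node (sketch):

    # expect: pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "systemd",
    # conmon_cgroup = "pod", and "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
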
	I0917 00:28:41.659897  591333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:28:41.670457  591333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:28:41.680684  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:28:41.816654  591333 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 00:28:41.923179  591333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:28:41.923241  591333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:28:41.927246  591333 start.go:563] Will wait 60s for crictl version
	I0917 00:28:41.927309  591333 ssh_runner.go:195] Run: which crictl
	I0917 00:28:41.931155  591333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:28:41.970363  591333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:28:41.970470  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:28:42.009043  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:28:42.057831  591333 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:28:42.059352  591333 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:28:42.061008  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:28:42.081413  591333 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:28:42.086716  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:28:42.100745  591333 mustload.go:65] Loading cluster: ha-671025
	I0917 00:28:42.100976  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:28:42.101278  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:42.124810  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:28:42.125292  591333 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.3
	I0917 00:28:42.125333  591333 certs.go:194] generating shared ca certs ...
	I0917 00:28:42.125361  591333 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:42.125545  591333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:28:42.125614  591333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:28:42.125626  591333 certs.go:256] generating profile certs ...
	I0917 00:28:42.125787  591333 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:28:42.125831  591333 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.d800739c
	I0917 00:28:42.125848  591333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.d800739c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0917 00:28:43.131520  591333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.d800739c ...
	I0917 00:28:43.131559  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.d800739c: {Name:mk97bbbbe985039a36a56311ec983801d49afc24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:43.131793  591333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.d800739c ...
	I0917 00:28:43.131814  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.d800739c: {Name:mk2a126624b47a1fbca817c2bf7b065e9ee5a854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:43.131938  591333 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.d800739c -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt
	I0917 00:28:43.132097  591333 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.d800739c -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key
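The regenerated apiserver cert above carries SANs for the ClusterIP service VIP (10.96.0.1), loopback, both node IPs, and the kube-vip HA VIP 192.168.49.254; the .d800739c-suffixed temp files are then promoted to apiserver.crt/.key. One way to confirm the SAN list on the result, with standard openssl and the path from the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt \
      | grep -A1 'Subject Alternative Name'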
	I0917 00:28:43.132233  591333 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:28:43.132252  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:28:43.132265  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:28:43.132275  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:28:43.132286  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:28:43.132296  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:28:43.132308  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:28:43.132318  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:28:43.132330  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:28:43.132385  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:28:43.132425  591333 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:28:43.132435  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:28:43.132458  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:28:43.132480  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:28:43.132500  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:28:43.132536  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:28:43.132561  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:28:43.132576  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:43.132588  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:28:43.132646  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:43.152207  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:43.242834  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:28:43.247724  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:28:43.261684  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:28:43.265651  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 00:28:43.279426  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:28:43.283200  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:28:43.298316  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:28:43.302656  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 00:28:43.316567  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:28:43.320915  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:28:43.334735  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:28:43.339251  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:28:43.354686  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:28:43.382622  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:28:43.411140  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:28:43.439208  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:28:43.468797  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0917 00:28:43.497239  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:28:43.525628  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:28:43.552854  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:28:43.579567  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:28:43.613480  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:28:43.640927  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:28:43.668098  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:28:43.688016  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 00:28:43.709638  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:28:43.729987  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 00:28:43.751570  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:28:43.772873  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:28:43.793231  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:28:43.813996  591333 ssh_runner.go:195] Run: openssl version
	I0917 00:28:43.820372  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:28:43.831827  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:28:43.836450  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:28:43.836601  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:28:43.845799  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:28:43.858335  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:28:43.870361  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:28:43.874499  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:28:43.874557  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:28:43.882167  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:28:43.894006  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:28:43.906727  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:43.910868  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:43.910926  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:43.918600  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
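The hex filenames above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash symlinks: TLS libraries locate a CA in /etc/ssl/certs by the hash of its subject, so each imported PEM is hashed and then linked under that name. Reproducing one by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above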
	I0917 00:28:43.930014  591333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:28:43.933717  591333 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 00:28:43.933786  591333 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 crio true true} ...
	I0917 00:28:43.933892  591333 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:28:43.933920  591333 kube-vip.go:115] generating kube-vip config ...
	I0917 00:28:43.933956  591333 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:28:43.949251  591333 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
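kube-vip only enables control-plane load-balancing when IPVS is available, and the empty lsmod output above makes it fall back to plain VIP failover. On a host that does ship the modules, loading them before start would flip this path; this is illustrative only, not something the test run does:

    sudo modprobe -a ip_vs ip_vs_rr
    lsmod | grep ip_vs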
	I0917 00:28:43.949348  591333 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
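The manifest above is installed below as a static pod (/etc/kubernetes/manifests/kube-vip.yaml), so the kubelet on each control-plane node runs one kube-vip instance; per the env block, the instances elect a leader through the plndr-cp-lock lease and the winner answers ARP for the VIP 192.168.49.254 on eth0. A quick health check, assuming a working kubeconfig for this cluster:

    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'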
	I0917 00:28:43.949436  591333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:28:43.959785  591333 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:28:43.959858  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:28:43.970815  591333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 00:28:43.992525  591333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:28:44.016479  591333 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:28:44.038080  591333 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:28:44.042531  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:28:44.055802  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:28:44.123804  591333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:28:44.146604  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:28:44.146887  591333 start.go:317] joinCluster: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:28:44.146991  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0917 00:28:44.147052  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:44.166636  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:44.318607  591333 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:28:44.318654  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9ffj9m.gils691l0zbv1gz9 --discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-671025-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0917 00:29:01.319807  591333 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9ffj9m.gils691l0zbv1gz9 --discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-671025-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (17.001126344s)
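The 17s join above is a standard kubeadm control-plane join: the token and CA-cert hash were minted on the primary with the token create command a few lines earlier, --control-plane promotes m02 to a full control-plane member, and --apiserver-advertise-address pins it to the node's own IP. The same join string can be re-minted at any time on an existing control-plane node:

    sudo kubeadm token create --print-join-command --ttl=0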
	I0917 00:29:01.319840  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0917 00:29:01.532514  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-671025-m02 minikube.k8s.io/updated_at=2025_09_17T00_29_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-671025 minikube.k8s.io/primary=false
	I0917 00:29:01.623743  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-671025-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0917 00:29:01.704118  591333 start.go:319] duration metric: took 17.557224287s to joinCluster
	I0917 00:29:01.704207  591333 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:29:01.704539  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:29:01.705687  591333 out.go:179] * Verifying Kubernetes components...
	I0917 00:29:01.707014  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:29:01.810630  591333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:29:01.824161  591333 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:29:01.824231  591333 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:29:01.824550  591333 node_ready.go:35] waiting up to 6m0s for node "ha-671025-m02" to be "Ready" ...
	W0917 00:29:03.828446  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	W0917 00:29:05.829871  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	W0917 00:29:08.329045  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	W0917 00:29:10.828964  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	W0917 00:29:13.328972  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	W0917 00:29:15.828569  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	I0917 00:29:16.328859  591333 node_ready.go:49] node "ha-671025-m02" is "Ready"
	I0917 00:29:16.328891  591333 node_ready.go:38] duration metric: took 14.504319776s for node "ha-671025-m02" to be "Ready" ...
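The 14.5s Ready poll above is equivalent to waiting on the node condition by hand; a sketch, assuming kubectl points at this cluster:

    kubectl wait --for=condition=Ready node/ha-671025-m02 --timeout=360s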
	I0917 00:29:16.328908  591333 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:29:16.328959  591333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:29:16.341005  591333 api_server.go:72] duration metric: took 14.636761134s to wait for apiserver process to appear ...
	I0917 00:29:16.341029  591333 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:29:16.341048  591333 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:29:16.345248  591333 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:29:16.346148  591333 api_server.go:141] control plane version: v1.34.0
	I0917 00:29:16.346174  591333 api_server.go:131] duration metric: took 5.137742ms to wait for apiserver health ...
	I0917 00:29:16.346183  591333 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:29:16.351147  591333 system_pods.go:59] 17 kube-system pods found
	I0917 00:29:16.351175  591333 system_pods.go:61] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running
	I0917 00:29:16.351180  591333 system_pods.go:61] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running
	I0917 00:29:16.351184  591333 system_pods.go:61] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:29:16.351187  591333 system_pods.go:61] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:29:16.351190  591333 system_pods.go:61] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:29:16.351194  591333 system_pods.go:61] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:29:16.351198  591333 system_pods.go:61] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:29:16.351203  591333 system_pods.go:61] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:29:16.351206  591333 system_pods.go:61] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:29:16.351210  591333 system_pods.go:61] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:29:16.351213  591333 system_pods.go:61] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:29:16.351216  591333 system_pods.go:61] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:29:16.351219  591333 system_pods.go:61] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:29:16.351222  591333 system_pods.go:61] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:29:16.351225  591333 system_pods.go:61] "kube-vip-ha-671025" [d18d568e-7183-4cb4-898f-c730aa8b9811] Running
	I0917 00:29:16.351227  591333 system_pods.go:61] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:29:16.351230  591333 system_pods.go:61] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:29:16.351235  591333 system_pods.go:74] duration metric: took 5.047428ms to wait for pod list to return data ...
	I0917 00:29:16.351245  591333 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:29:16.354087  591333 default_sa.go:45] found service account: "default"
	I0917 00:29:16.354107  591333 default_sa.go:55] duration metric: took 2.857135ms for default service account to be created ...
	I0917 00:29:16.354115  591333 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:29:16.357519  591333 system_pods.go:86] 17 kube-system pods found
	I0917 00:29:16.357544  591333 system_pods.go:89] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running
	I0917 00:29:16.357550  591333 system_pods.go:89] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running
	I0917 00:29:16.357555  591333 system_pods.go:89] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:29:16.357560  591333 system_pods.go:89] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:29:16.357565  591333 system_pods.go:89] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:29:16.357570  591333 system_pods.go:89] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:29:16.357576  591333 system_pods.go:89] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:29:16.357582  591333 system_pods.go:89] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:29:16.357591  591333 system_pods.go:89] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:29:16.357599  591333 system_pods.go:89] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:29:16.357605  591333 system_pods.go:89] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:29:16.357611  591333 system_pods.go:89] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:29:16.357614  591333 system_pods.go:89] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:29:16.357619  591333 system_pods.go:89] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:29:16.357623  591333 system_pods.go:89] "kube-vip-ha-671025" [d18d568e-7183-4cb4-898f-c730aa8b9811] Running
	I0917 00:29:16.357630  591333 system_pods.go:89] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:29:16.357633  591333 system_pods.go:89] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:29:16.357642  591333 system_pods.go:126] duration metric: took 3.522377ms to wait for k8s-apps to be running ...
	I0917 00:29:16.357652  591333 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:29:16.357710  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:29:16.370259  591333 system_svc.go:56] duration metric: took 12.594604ms WaitForService to wait for kubelet
	I0917 00:29:16.370292  591333 kubeadm.go:578] duration metric: took 14.666051199s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:29:16.370351  591333 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:29:16.373484  591333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:29:16.373509  591333 node_conditions.go:123] node cpu capacity is 8
	I0917 00:29:16.373526  591333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:29:16.373531  591333 node_conditions.go:123] node cpu capacity is 8
	I0917 00:29:16.373545  591333 node_conditions.go:105] duration metric: took 3.187263ms to run NodePressure ...
	I0917 00:29:16.373563  591333 start.go:241] waiting for startup goroutines ...
	I0917 00:29:16.373599  591333 start.go:255] writing updated cluster config ...
	I0917 00:29:16.375540  591333 out.go:203] 
	I0917 00:29:16.376982  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:29:16.377123  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:29:16.378689  591333 out.go:179] * Starting "ha-671025-m03" control-plane node in "ha-671025" cluster
	I0917 00:29:16.380127  591333 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:29:16.381271  591333 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:29:16.382178  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:29:16.382203  591333 cache.go:58] Caching tarball of preloaded images
	I0917 00:29:16.382278  591333 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:29:16.382305  591333 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:29:16.382314  591333 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:29:16.382434  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:29:16.405280  591333 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:29:16.405301  591333 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:29:16.405319  591333 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:29:16.405349  591333 start.go:360] acquireMachinesLock for ha-671025-m03: {Name:mk60ae20c28e89b2af34eaf4825fcf2e756b9f82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:29:16.405476  591333 start.go:364] duration metric: took 109.564µs to acquireMachinesLock for "ha-671025-m03"
	I0917 00:29:16.405502  591333 start.go:93] Provisioning new machine with config: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:29:16.405601  591333 start.go:125] createHost starting for "m03" (driver="docker")
	I0917 00:29:16.408212  591333 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:29:16.408326  591333 start.go:159] libmachine.API.Create for "ha-671025" (driver="docker")
	I0917 00:29:16.408364  591333 client.go:168] LocalClient.Create starting
	I0917 00:29:16.408459  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 00:29:16.408501  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:29:16.408515  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:29:16.408569  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 00:29:16.408588  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:29:16.408596  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:29:16.408797  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:29:16.428129  591333 network_create.go:77] Found existing network {name:ha-671025 subnet:0xc001a2abd0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0917 00:29:16.428169  591333 kic.go:121] calculated static IP "192.168.49.4" for the "ha-671025-m03" container
	I0917 00:29:16.428233  591333 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:29:16.447362  591333 cli_runner.go:164] Run: docker volume create ha-671025-m03 --label name.minikube.sigs.k8s.io=ha-671025-m03 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:29:16.467514  591333 oci.go:103] Successfully created a docker volume ha-671025-m03
	I0917 00:29:16.467629  591333 cli_runner.go:164] Run: docker run --rm --name ha-671025-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025-m03 --entrypoint /usr/bin/test -v ha-671025-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:29:16.870641  591333 oci.go:107] Successfully prepared a docker volume ha-671025-m03
	I0917 00:29:16.870686  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:29:16.870713  591333 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:29:16.870789  591333 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:29:21.201351  591333 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.33049988s)
	I0917 00:29:21.201386  591333 kic.go:203] duration metric: took 4.330670212s to extract preloaded images to volume ...
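The two docker runs above are minikube's volume-priming step: a throwaway kicbase container mounts the named volume ha-671025-m03 and untars the lz4-compressed image preload into it, so the node container created below boots with /var (and its container images) already populated. Condensed form of the same step, where $PRELOAD and $KICBASE stand in for the tarball path and pinned kicbase digest shown above:

    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD:/preloaded.tar:ro" -v ha-671025-m03:/extractDir \
      "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir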
	W0917 00:29:21.201499  591333 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:29:21.201529  591333 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:29:21.201570  591333 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:29:21.257059  591333 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-671025-m03 --name ha-671025-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-671025-m03 --network ha-671025 --ip 192.168.49.4 --volume ha-671025-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
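Note the --ip 192.168.49.4 on the run above: because ha-671025 is a user-defined docker network, minikube can hand each node a deterministic address (calculated at kic.go:121). The allocations are visible on the network itself, using the same Containers template the test runs elsewhere:

    docker network inspect ha-671025 \
      --format '{{range $k,$v := .Containers}}{{$v.Name}} {{$v.IPv4Address}} {{end}}'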
	I0917 00:29:21.526231  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Running}}
	I0917 00:29:21.546352  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:29:21.567256  591333 cli_runner.go:164] Run: docker exec ha-671025-m03 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:29:21.619083  591333 oci.go:144] the created container "ha-671025-m03" has a running status.
	I0917 00:29:21.619117  591333 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa...
	I0917 00:29:21.831158  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0917 00:29:21.831204  591333 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:29:21.864081  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:29:21.886560  591333 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 00:29:21.886587  591333 kic_runner.go:114] Args: [docker exec --privileged ha-671025-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 00:29:21.939241  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:29:21.960815  591333 machine.go:93] provisionDockerMachine start ...
	I0917 00:29:21.961005  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:21.982259  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:29:21.982549  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0917 00:29:21.982571  591333 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:29:22.123516  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m03
	
	I0917 00:29:22.123558  591333 ubuntu.go:182] provisioning hostname "ha-671025-m03"
	I0917 00:29:22.123633  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:22.143852  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:29:22.144070  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0917 00:29:22.144083  591333 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m03 && echo "ha-671025-m03" | sudo tee /etc/hostname
	I0917 00:29:22.298146  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m03
	
	I0917 00:29:22.298229  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:22.317607  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:29:22.317851  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0917 00:29:22.317875  591333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:29:22.455839  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:29:22.455874  591333 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:29:22.455894  591333 ubuntu.go:190] setting up certificates
	I0917 00:29:22.455908  591333 provision.go:84] configureAuth start
	I0917 00:29:22.455983  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:29:22.474745  591333 provision.go:143] copyHostCerts
	I0917 00:29:22.474791  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:29:22.474821  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:29:22.474830  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:29:22.474900  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:29:22.474988  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:29:22.475015  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:29:22.475028  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:29:22.475061  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:29:22.475116  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:29:22.475134  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:29:22.475141  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:29:22.475164  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:29:22.475216  591333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m03 san=[127.0.0.1 192.168.49.4 ha-671025-m03 localhost minikube]
	I0917 00:29:22.562518  591333 provision.go:177] copyRemoteCerts
	I0917 00:29:22.562597  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:29:22.562645  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:22.582491  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:29:22.681516  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:29:22.681585  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:29:22.711977  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:29:22.712070  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:29:22.739378  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:29:22.739454  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:29:22.767225  591333 provision.go:87] duration metric: took 311.299307ms to configureAuth
	I0917 00:29:22.767254  591333 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:29:22.767513  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:29:22.767641  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:22.787106  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:29:22.787322  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0917 00:29:22.787337  591333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:29:23.027585  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:29:23.027618  591333 machine.go:96] duration metric: took 1.066782115s to provisionDockerMachine
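The CRIO_MINIKUBE_OPTIONS drop-in confirmed just above marks the whole service CIDR (10.96.0.0/12) as an insecure registry, so ClusterIP-hosted registries can be pulled without TLS; the crio restart in the same command picks it up. It can be inspected on the node afterwards:

    cat /etc/sysconfig/crio.minikube
    systemctl is-active crio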
	I0917 00:29:23.027629  591333 client.go:171] duration metric: took 6.619257203s to LocalClient.Create
	I0917 00:29:23.027644  591333 start.go:167] duration metric: took 6.619319411s to libmachine.API.Create "ha-671025"
	I0917 00:29:23.027653  591333 start.go:293] postStartSetup for "ha-671025-m03" (driver="docker")
	I0917 00:29:23.027665  591333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:29:23.027739  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:29:23.027789  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:23.048535  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:29:23.148623  591333 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:29:23.152295  591333 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:29:23.152333  591333 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:29:23.152344  591333 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:29:23.152354  591333 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:29:23.152402  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:29:23.152478  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:29:23.152577  591333 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:29:23.152589  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:29:23.152698  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:29:23.162366  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:29:23.192510  591333 start.go:296] duration metric: took 164.839418ms for postStartSetup
	I0917 00:29:23.192875  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:29:23.211261  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:29:23.211545  591333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:29:23.211589  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:23.228367  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:29:23.323873  591333 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:29:23.328453  591333 start.go:128] duration metric: took 6.922836798s to createHost
	I0917 00:29:23.328480  591333 start.go:83] releasing machines lock for "ha-671025-m03", held for 6.9229927s
	I0917 00:29:23.328559  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:29:23.348699  591333 out.go:179] * Found network options:
	I0917 00:29:23.350091  591333 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0917 00:29:23.351262  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:29:23.351286  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:29:23.351307  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:29:23.351319  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:29:23.351413  591333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:29:23.351457  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:23.351483  591333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:29:23.351555  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:23.370656  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:29:23.370963  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:29:23.603202  591333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:29:23.608556  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:29:23.632987  591333 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:29:23.633078  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:29:23.665413  591333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 00:29:23.665445  591333 start.go:495] detecting cgroup driver to use...
	I0917 00:29:23.665479  591333 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:29:23.665582  591333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:29:23.682513  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:29:23.695198  591333 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:29:23.695265  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:29:23.710235  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:29:23.725450  591333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:29:23.796030  591333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:29:23.870255  591333 docker.go:234] disabling docker service ...
	I0917 00:29:23.870317  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:29:23.889003  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:29:23.901613  591333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:29:23.973987  591333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:29:24.138099  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
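At this point both Docker and cri-docker have been stopped and masked so CRI-O is the only runtime left on the node. A quick sanity check (a sketch, assuming systemd on the node):

    systemctl is-enabled docker.service    # expected: masked
    systemctl is-active docker.service     # expected: inactive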
	I0917 00:29:24.150712  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:29:24.168641  591333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:29:24.168702  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.181874  591333 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:29:24.181936  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.193571  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.204646  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.215806  591333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:29:24.225866  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.236708  591333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.254758  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
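The sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on roughly the following fragment (a sketch; surrounding keys vary by CRI-O version):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]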
	I0917 00:29:24.266984  591333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:29:24.276695  591333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:29:24.286587  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:29:24.356850  591333 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 00:29:24.461065  591333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:29:24.461156  591333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:29:24.465833  591333 start.go:563] Will wait 60s for crictl version
	I0917 00:29:24.465903  591333 ssh_runner.go:195] Run: which crictl
	I0917 00:29:24.469817  591333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:29:24.506319  591333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
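With the endpoint pinned in /etc/crictl.yaml above, the same version query can be reproduced by hand (assuming crictl is on the PATH):

    sudo crictl version
    # or, bypassing /etc/crictl.yaml:
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version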
	I0917 00:29:24.506419  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:29:24.544050  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:29:24.583372  591333 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:29:24.584727  591333 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:29:24.586235  591333 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0917 00:29:24.587662  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:29:24.605611  591333 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:29:24.610151  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
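The one-liner above is a replace-in-place idiom for /etc/hosts: filter out any stale entry, append the fresh mapping, then copy the temp file back with sudo. Unrolled, the same steps look roughly like:

    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$
    printf '192.168.49.1\thost.minikube.internal\n' >> /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts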
	I0917 00:29:24.622865  591333 mustload.go:65] Loading cluster: ha-671025
	I0917 00:29:24.623090  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:29:24.623289  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:29:24.641474  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:29:24.641732  591333 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.4
	I0917 00:29:24.641743  591333 certs.go:194] generating shared ca certs ...
	I0917 00:29:24.641758  591333 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:29:24.641894  591333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:29:24.641944  591333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:29:24.641954  591333 certs.go:256] generating profile certs ...
	I0917 00:29:24.642025  591333 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:29:24.642065  591333 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.bb6f0fe7
	I0917 00:29:24.642081  591333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.bb6f0fe7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
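The SAN list covers the in-cluster service VIP (10.96.0.1), loopback, all three control-plane IPs, and the kube-vip address (192.168.49.254). Once the cert lands on the node, the SANs can be inspected with (assuming openssl is available there):

    openssl x509 -noout -text \
      -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'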
	I0917 00:29:24.856212  591333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.bb6f0fe7 ...
	I0917 00:29:24.856249  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.bb6f0fe7: {Name:mk65d29cf7ba29b99ab2056d134a2884f928fccb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:29:24.856490  591333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.bb6f0fe7 ...
	I0917 00:29:24.856512  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.bb6f0fe7: {Name:mkd89da6d4d9fb3421e5c7677b39452bd32f11a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:29:24.856628  591333 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.bb6f0fe7 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt
	I0917 00:29:24.856803  591333 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.bb6f0fe7 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key
	I0917 00:29:24.856940  591333 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:29:24.856957  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:29:24.856970  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:29:24.856984  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:29:24.857022  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:29:24.857038  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:29:24.857051  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:29:24.857063  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:29:24.857073  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:29:24.857137  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:29:24.857169  591333 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:29:24.857179  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:29:24.857203  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:29:24.857236  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:29:24.857259  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:29:24.857298  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:29:24.857323  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:29:24.857336  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:29:24.857410  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:29:24.857487  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:29:24.876681  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:29:24.965759  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:29:24.970077  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:29:24.983505  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:29:24.987459  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 00:29:25.001249  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:29:25.005139  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:29:25.019000  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:29:25.023277  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 00:29:25.037665  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:29:25.041486  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:29:25.056004  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:29:25.060379  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:29:25.075527  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:29:25.103048  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:29:25.130436  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:29:25.156335  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:29:25.183962  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0917 00:29:25.210290  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:29:25.237850  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:29:25.264713  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:29:25.292266  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:29:25.322436  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:29:25.349159  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:29:25.376714  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:29:25.397066  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 00:29:25.416141  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:29:25.436031  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 00:29:25.455195  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:29:25.475694  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:29:25.494981  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:29:25.514182  591333 ssh_runner.go:195] Run: openssl version
	I0917 00:29:25.519757  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:29:25.530366  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:29:25.534300  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:29:25.534372  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:29:25.541463  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:29:25.551798  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:29:25.562696  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:29:25.566820  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:29:25.566898  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:29:25.575288  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:29:25.585578  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:29:25.596219  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:29:25.599949  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:29:25.600000  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:29:25.608220  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
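The 8-hex-digit .0 names above are OpenSSL subject-hash links; each hash is derived from the certificate itself, e.g. reproducing the b5213941 seen earlier:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941  -> hence the /etc/ssl/certs/b5213941.0 symlink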
	I0917 00:29:25.620163  591333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:29:25.623987  591333 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 00:29:25.624048  591333 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 crio true true} ...
	I0917 00:29:25.624137  591333 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
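The [Service] override above is installed later in this log as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf below); the effective merged unit can be reviewed on the node with:

    systemctl cat kubelet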
	I0917 00:29:25.624164  591333 kube-vip.go:115] generating kube-vip config ...
	I0917 00:29:25.624201  591333 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:29:25.637994  591333 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:29:25.638073  591333 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
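Since the ip_vs modules are missing, kube-vip falls back to ARP-based leader election for the VIP rather than IPVS load-balancing. The manifest above can be cross-checked on the live cluster (names taken from this run) with:

    kubectl -n kube-system get lease plndr-cp-lock
    kubectl -n kube-system logs kube-vip-ha-671025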
	I0917 00:29:25.638135  591333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:29:25.647722  591333 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:29:25.647792  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:29:25.658193  591333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 00:29:25.679949  591333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:29:25.703178  591333 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:29:25.726279  591333 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:29:25.730482  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:29:25.743251  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:29:25.813167  591333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:29:25.837618  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:29:25.837905  591333 start.go:317] joinCluster: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:29:25.838070  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0917 00:29:25.838130  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:29:25.859495  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:29:26.008672  591333 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:29:26.008736  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p1m8ud.vg6wowozjxeubnbu --discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-671025-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0917 00:29:38.691373  591333 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p1m8ud.vg6wowozjxeubnbu --discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-671025-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (12.682606276s)
	I0917 00:29:38.691443  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0917 00:29:38.941535  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-671025-m03 minikube.k8s.io/updated_at=2025_09_17T00_29_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-671025 minikube.k8s.io/primary=false
	I0917 00:29:39.021358  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-671025-m03 node-role.kubernetes.io/control-plane:NoSchedule-
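The trailing '-' in the taint command removes the control-plane NoSchedule taint, so this HA node can also run ordinary workloads; the result can be confirmed with:

    kubectl describe node ha-671025-m03 | grep -i taints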
	I0917 00:29:39.107652  591333 start.go:319] duration metric: took 13.269740721s to joinCluster
	I0917 00:29:39.107734  591333 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:29:39.108038  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:29:39.109032  591333 out.go:179] * Verifying Kubernetes components...
	I0917 00:29:39.110170  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:29:39.212840  591333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:29:39.228175  591333 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:29:39.228249  591333 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:29:39.228513  591333 node_ready.go:35] waiting up to 6m0s for node "ha-671025-m03" to be "Ready" ...
	W0917 00:29:41.232779  591333 node_ready.go:57] node "ha-671025-m03" has "Ready":"False" status (will retry)
	W0917 00:29:43.732906  591333 node_ready.go:57] node "ha-671025-m03" has "Ready":"False" status (will retry)
	W0917 00:29:46.232976  591333 node_ready.go:57] node "ha-671025-m03" has "Ready":"False" status (will retry)
	W0917 00:29:48.732961  591333 node_ready.go:57] node "ha-671025-m03" has "Ready":"False" status (will retry)
	W0917 00:29:51.232362  591333 node_ready.go:57] node "ha-671025-m03" has "Ready":"False" status (will retry)
	I0917 00:29:51.732347  591333 node_ready.go:49] node "ha-671025-m03" is "Ready"
	I0917 00:29:51.732379  591333 node_ready.go:38] duration metric: took 12.503848437s for node "ha-671025-m03" to be "Ready" ...
	I0917 00:29:51.732413  591333 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:29:51.732477  591333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:29:51.745126  591333 api_server.go:72] duration metric: took 12.637355364s to wait for apiserver process to appear ...
	I0917 00:29:51.745157  591333 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:29:51.745182  591333 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:29:51.751075  591333 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:29:51.752025  591333 api_server.go:141] control plane version: v1.34.0
	I0917 00:29:51.752049  591333 api_server.go:131] duration metric: took 6.885054ms to wait for apiserver health ...
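The healthz probe above is a plain HTTPS GET; the same check by hand (with -k, since the API server cert is signed by minikubeCA rather than a system CA):

    curl -sk https://192.168.49.2:8443/healthz
    # ok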
	I0917 00:29:51.752060  591333 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:29:51.758905  591333 system_pods.go:59] 24 kube-system pods found
	I0917 00:29:51.758940  591333 system_pods.go:61] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running
	I0917 00:29:51.758949  591333 system_pods.go:61] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running
	I0917 00:29:51.758957  591333 system_pods.go:61] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:29:51.758963  591333 system_pods.go:61] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:29:51.758968  591333 system_pods.go:61] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running
	I0917 00:29:51.758973  591333 system_pods.go:61] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:29:51.758978  591333 system_pods.go:61] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:29:51.758990  591333 system_pods.go:61] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:29:51.758995  591333 system_pods.go:61] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:29:51.759000  591333 system_pods.go:61] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:29:51.759004  591333 system_pods.go:61] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running
	I0917 00:29:51.759009  591333 system_pods.go:61] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:29:51.759018  591333 system_pods.go:61] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:29:51.759023  591333 system_pods.go:61] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running
	I0917 00:29:51.759027  591333 system_pods.go:61] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:29:51.759035  591333 system_pods.go:61] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:29:51.759039  591333 system_pods.go:61] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:29:51.759049  591333 system_pods.go:61] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:29:51.759054  591333 system_pods.go:61] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:29:51.759058  591333 system_pods.go:61] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running
	I0917 00:29:51.759066  591333 system_pods.go:61] "kube-vip-ha-671025" [d18d568e-7183-4cb4-898f-c730aa8b9811] Running
	I0917 00:29:51.759070  591333 system_pods.go:61] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:29:51.759075  591333 system_pods.go:61] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:29:51.759079  591333 system_pods.go:61] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:29:51.759086  591333 system_pods.go:74] duration metric: took 7.019861ms to wait for pod list to return data ...
	I0917 00:29:51.759106  591333 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:29:51.761820  591333 default_sa.go:45] found service account: "default"
	I0917 00:29:51.761841  591333 default_sa.go:55] duration metric: took 2.726063ms for default service account to be created ...
	I0917 00:29:51.761850  591333 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:29:51.766999  591333 system_pods.go:86] 24 kube-system pods found
	I0917 00:29:51.767031  591333 system_pods.go:89] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running
	I0917 00:29:51.767037  591333 system_pods.go:89] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running
	I0917 00:29:51.767041  591333 system_pods.go:89] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:29:51.767044  591333 system_pods.go:89] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:29:51.767047  591333 system_pods.go:89] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running
	I0917 00:29:51.767050  591333 system_pods.go:89] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:29:51.767053  591333 system_pods.go:89] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:29:51.767057  591333 system_pods.go:89] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:29:51.767060  591333 system_pods.go:89] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:29:51.767062  591333 system_pods.go:89] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:29:51.767066  591333 system_pods.go:89] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running
	I0917 00:29:51.767069  591333 system_pods.go:89] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:29:51.767072  591333 system_pods.go:89] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:29:51.767075  591333 system_pods.go:89] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running
	I0917 00:29:51.767078  591333 system_pods.go:89] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:29:51.767081  591333 system_pods.go:89] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:29:51.767084  591333 system_pods.go:89] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:29:51.767087  591333 system_pods.go:89] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:29:51.767089  591333 system_pods.go:89] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:29:51.767093  591333 system_pods.go:89] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running
	I0917 00:29:51.767095  591333 system_pods.go:89] "kube-vip-ha-671025" [d18d568e-7183-4cb4-898f-c730aa8b9811] Running
	I0917 00:29:51.767099  591333 system_pods.go:89] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:29:51.767105  591333 system_pods.go:89] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:29:51.767108  591333 system_pods.go:89] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:29:51.767115  591333 system_pods.go:126] duration metric: took 5.259145ms to wait for k8s-apps to be running ...
	I0917 00:29:51.767125  591333 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:29:51.767173  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:29:51.780761  591333 system_svc.go:56] duration metric: took 13.623242ms WaitForService to wait for kubelet
	I0917 00:29:51.780795  591333 kubeadm.go:578] duration metric: took 12.673026165s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:29:51.780819  591333 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:29:51.783987  591333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:29:51.784014  591333 node_conditions.go:123] node cpu capacity is 8
	I0917 00:29:51.784059  591333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:29:51.784065  591333 node_conditions.go:123] node cpu capacity is 8
	I0917 00:29:51.784075  591333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:29:51.784081  591333 node_conditions.go:123] node cpu capacity is 8
	I0917 00:29:51.784090  591333 node_conditions.go:105] duration metric: took 3.264516ms to run NodePressure ...
	I0917 00:29:51.784106  591333 start.go:241] waiting for startup goroutines ...
	I0917 00:29:51.784138  591333 start.go:255] writing updated cluster config ...
	I0917 00:29:51.784529  591333 ssh_runner.go:195] Run: rm -f paused
	I0917 00:29:51.788748  591333 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:29:51.789284  591333 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:29:51.792784  591333 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mqh24" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.797966  591333 pod_ready.go:94] pod "coredns-66bc5c9577-mqh24" is "Ready"
	I0917 00:29:51.797991  591333 pod_ready.go:86] duration metric: took 5.183268ms for pod "coredns-66bc5c9577-mqh24" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.798004  591333 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vfj56" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.802611  591333 pod_ready.go:94] pod "coredns-66bc5c9577-vfj56" is "Ready"
	I0917 00:29:51.802634  591333 pod_ready.go:86] duration metric: took 4.623535ms for pod "coredns-66bc5c9577-vfj56" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.805006  591333 pod_ready.go:83] waiting for pod "etcd-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.809379  591333 pod_ready.go:94] pod "etcd-ha-671025" is "Ready"
	I0917 00:29:51.809416  591333 pod_ready.go:86] duration metric: took 4.389649ms for pod "etcd-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.809427  591333 pod_ready.go:83] waiting for pod "etcd-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.813691  591333 pod_ready.go:94] pod "etcd-ha-671025-m02" is "Ready"
	I0917 00:29:51.813712  591333 pod_ready.go:86] duration metric: took 4.279249ms for pod "etcd-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.813720  591333 pod_ready.go:83] waiting for pod "etcd-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.990174  591333 request.go:683] "Waited before sending request" delay="176.338354ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671025-m03"
	I0917 00:29:52.190229  591333 request.go:683] "Waited before sending request" delay="196.333995ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m03"
	I0917 00:29:52.193665  591333 pod_ready.go:94] pod "etcd-ha-671025-m03" is "Ready"
	I0917 00:29:52.193693  591333 pod_ready.go:86] duration metric: took 379.968155ms for pod "etcd-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:52.390210  591333 request.go:683] "Waited before sending request" delay="196.377999ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0917 00:29:52.394451  591333 pod_ready.go:83] waiting for pod "kube-apiserver-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:52.590608  591333 request.go:683] "Waited before sending request" delay="196.01886ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671025"
	I0917 00:29:52.790098  591333 request.go:683] "Waited before sending request" delay="196.369455ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025"
	I0917 00:29:52.793544  591333 pod_ready.go:94] pod "kube-apiserver-ha-671025" is "Ready"
	I0917 00:29:52.793578  591333 pod_ready.go:86] duration metric: took 399.098458ms for pod "kube-apiserver-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:52.793595  591333 pod_ready.go:83] waiting for pod "kube-apiserver-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:52.990070  591333 request.go:683] "Waited before sending request" delay="196.355614ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671025-m02"
	I0917 00:29:53.190086  591333 request.go:683] "Waited before sending request" delay="196.360413ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m02"
	I0917 00:29:53.193284  591333 pod_ready.go:94] pod "kube-apiserver-ha-671025-m02" is "Ready"
	I0917 00:29:53.193311  591333 pod_ready.go:86] duration metric: took 399.708595ms for pod "kube-apiserver-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:53.193320  591333 pod_ready.go:83] waiting for pod "kube-apiserver-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:53.390584  591333 request.go:683] "Waited before sending request" delay="197.147317ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671025-m03"
	I0917 00:29:53.590103  591333 request.go:683] "Waited before sending request" delay="196.290111ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m03"
	I0917 00:29:53.593362  591333 pod_ready.go:94] pod "kube-apiserver-ha-671025-m03" is "Ready"
	I0917 00:29:53.593412  591333 pod_ready.go:86] duration metric: took 400.084881ms for pod "kube-apiserver-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:53.790733  591333 request.go:683] "Waited before sending request" delay="197.180718ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0917 00:29:53.794548  591333 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:53.989879  591333 request.go:683] "Waited before sending request" delay="195.193469ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671025"
	I0917 00:29:54.190518  591333 request.go:683] "Waited before sending request" delay="197.369336ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025"
	I0917 00:29:54.194152  591333 pod_ready.go:94] pod "kube-controller-manager-ha-671025" is "Ready"
	I0917 00:29:54.194183  591333 pod_ready.go:86] duration metric: took 399.607782ms for pod "kube-controller-manager-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:54.194195  591333 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:54.390598  591333 request.go:683] "Waited before sending request" delay="196.290873ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671025-m02"
	I0917 00:29:54.590577  591333 request.go:683] "Waited before sending request" delay="196.311056ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m02"
	I0917 00:29:54.594360  591333 pod_ready.go:94] pod "kube-controller-manager-ha-671025-m02" is "Ready"
	I0917 00:29:54.594432  591333 pod_ready.go:86] duration metric: took 400.227353ms for pod "kube-controller-manager-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:54.594445  591333 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:54.789830  591333 request.go:683] "Waited before sending request" delay="195.263054ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671025-m03"
	I0917 00:29:54.990466  591333 request.go:683] "Waited before sending request" delay="197.342033ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m03"
	I0917 00:29:54.993759  591333 pod_ready.go:94] pod "kube-controller-manager-ha-671025-m03" is "Ready"
	I0917 00:29:54.993788  591333 pod_ready.go:86] duration metric: took 399.335381ms for pod "kube-controller-manager-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:55.190138  591333 request.go:683] "Waited before sending request" delay="196.195607ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0917 00:29:55.194060  591333 pod_ready.go:83] waiting for pod "kube-proxy-4k8lz" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:55.390543  591333 request.go:683] "Waited before sending request" delay="196.36227ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4k8lz"
	I0917 00:29:55.590492  591333 request.go:683] "Waited before sending request" delay="196.425967ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m02"
	I0917 00:29:55.593719  591333 pod_ready.go:94] pod "kube-proxy-4k8lz" is "Ready"
	I0917 00:29:55.593746  591333 pod_ready.go:86] duration metric: took 399.654072ms for pod "kube-proxy-4k8lz" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:55.593753  591333 pod_ready.go:83] waiting for pod "kube-proxy-f58dt" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:55.790222  591333 request.go:683] "Waited before sending request" delay="196.381687ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f58dt"
	I0917 00:29:55.990078  591333 request.go:683] "Waited before sending request" delay="196.35386ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025"
	I0917 00:29:55.993537  591333 pod_ready.go:94] pod "kube-proxy-f58dt" is "Ready"
	I0917 00:29:55.993565  591333 pod_ready.go:86] duration metric: took 399.806033ms for pod "kube-proxy-f58dt" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:55.993573  591333 pod_ready.go:83] waiting for pod "kube-proxy-q96zd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:56.190000  591333 request.go:683] "Waited before sending request" delay="196.348448ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q96zd"
	I0917 00:29:56.390582  591333 request.go:683] "Waited before sending request" delay="197.229029ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m03"
	I0917 00:29:56.393563  591333 pod_ready.go:94] pod "kube-proxy-q96zd" is "Ready"
	I0917 00:29:56.393592  591333 pod_ready.go:86] duration metric: took 400.012384ms for pod "kube-proxy-q96zd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:56.590057  591333 request.go:683] "Waited before sending request" delay="196.329973ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I0917 00:29:56.593914  591333 pod_ready.go:83] waiting for pod "kube-scheduler-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:56.790433  591333 request.go:683] "Waited before sending request" delay="196.375831ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671025"
	I0917 00:29:56.990073  591333 request.go:683] "Waited before sending request" delay="196.373603ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025"
	I0917 00:29:56.993259  591333 pod_ready.go:94] pod "kube-scheduler-ha-671025" is "Ready"
	I0917 00:29:56.993288  591333 pod_ready.go:86] duration metric: took 399.350969ms for pod "kube-scheduler-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:56.993297  591333 pod_ready.go:83] waiting for pod "kube-scheduler-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:57.190549  591333 request.go:683] "Waited before sending request" delay="197.173424ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671025-m02"
	I0917 00:29:57.390069  591333 request.go:683] "Waited before sending request" delay="196.377477ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m02"
	I0917 00:29:57.393214  591333 pod_ready.go:94] pod "kube-scheduler-ha-671025-m02" is "Ready"
	I0917 00:29:57.393243  591333 pod_ready.go:86] duration metric: took 399.939467ms for pod "kube-scheduler-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:57.393254  591333 pod_ready.go:83] waiting for pod "kube-scheduler-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:57.590599  591333 request.go:683] "Waited before sending request" delay="197.214476ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671025-m03"
	I0917 00:29:57.790207  591333 request.go:683] "Waited before sending request" delay="196.332231ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m03"
	I0917 00:29:57.793613  591333 pod_ready.go:94] pod "kube-scheduler-ha-671025-m03" is "Ready"
	I0917 00:29:57.793646  591333 pod_ready.go:86] duration metric: took 400.384119ms for pod "kube-scheduler-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:57.793660  591333 pod_ready.go:40] duration metric: took 6.00487949s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:29:57.841958  591333 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 00:29:57.843747  591333 out.go:179] * Done! kubectl is now configured to use "ha-671025" cluster and "default" namespace by default
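Note on the repeated "Waited before sending request ... client-side throttling" entries above: these come from client-go's token-bucket rate limiter, which every request passes through before reaching the API server; the ~196ms delays are consistent with the default QPS=5/Burst=10 once the burst is drained. A minimal sketch of that mechanism (illustrative code, not minikube's; the QPS/Burst values are the client-go defaults, assumed here):

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// rest.Config defaults when unset: QPS=5, Burst=10. After the burst is
	// spent, each Wait blocks ~1/5s -- matching the ~196ms waits in the log.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
	for i := 0; i < 12; i++ {
		start := time.Now()
		if err := limiter.Wait(context.Background()); err != nil {
			panic(err)
		}
		fmt.Printf("request %2d waited %v\n", i, time.Since(start).Round(time.Millisecond))
	}
}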
	
	
	==> CRI-O <==
	Sep 17 00:28:42 ha-671025 crio[943]: time="2025-09-17 00:28:42.206543981Z" level=info msg="Starting container: 1b2322cca73664c31f8f758bee585a6b9e12f3a99cb34f8075ed9d4ba6a7424e" id=3b28becd-1d34-462d-9922-4034e8ecf6f4 name=/runtime.v1.RuntimeService/StartContainer
	Sep 17 00:28:42 ha-671025 crio[943]: time="2025-09-17 00:28:42.215619295Z" level=info msg="Started container" PID=2320 containerID=1b2322cca73664c31f8f758bee585a6b9e12f3a99cb34f8075ed9d4ba6a7424e description=kube-system/coredns-66bc5c9577-vfj56/coredns id=3b28becd-1d34-462d-9922-4034e8ecf6f4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39dc71832b8bb399ba20ce48f2427629524276766208427b4f7705d2c0d5a7bc
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.112704664Z" level=info msg="Running pod sandbox: default/busybox-7b57f96db7-wj4r5/POD" id=736d7d5c-e0a6-4add-85d8-01da4ad50ed0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.112791033Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.130623397Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-wj4r5 Namespace:default ID:6347f27b59723d9ed5d766202817f12864c3d029b677244c2214fe27b0e75f0f UID:90adda6e-a8af-41fd-880e-3820a76c660d NetNS:/var/run/netns/54f65633-04cf-4581-8596-83e8bb3b45c1 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.130669888Z" level=info msg="Adding pod default_busybox-7b57f96db7-wj4r5 to CNI network \"kindnet\" (type=ptp)"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.142401777Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-wj4r5 Namespace:default ID:6347f27b59723d9ed5d766202817f12864c3d029b677244c2214fe27b0e75f0f UID:90adda6e-a8af-41fd-880e-3820a76c660d NetNS:/var/run/netns/54f65633-04cf-4581-8596-83e8bb3b45c1 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.142574298Z" level=info msg="Checking pod default_busybox-7b57f96db7-wj4r5 for CNI network kindnet (type=ptp)"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.143612429Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.144813443Z" level=info msg="Ran pod sandbox 6347f27b59723d9ed5d766202817f12864c3d029b677244c2214fe27b0e75f0f with infra container: default/busybox-7b57f96db7-wj4r5/POD" id=736d7d5c-e0a6-4add-85d8-01da4ad50ed0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.146339053Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=b8619712-84fc-406a-a07d-46448e259e67 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.146578417Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=b8619712-84fc-406a-a07d-46448e259e67 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.147237951Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=4869ff93-ff5d-4c5f-bc8f-3cabe3c7db56 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.148635276Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.991719699Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.350447433Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=4869ff93-ff5d-4c5f-bc8f-3cabe3c7db56 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.351203929Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=2f8c5eb2-d95f-4e4e-9638-5776fd3166b1 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.352357885Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2f8c5eb2-d95f-4e4e-9638-5776fd3166b1 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.353373442Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=abfbef5f-c90d-4ad8-b2a8-4baf401fbd2d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.354669415Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=abfbef5f-c90d-4ad8-b2a8-4baf401fbd2d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.358933450Z" level=info msg="Creating container: default/busybox-7b57f96db7-wj4r5/busybox" id=05a5a4c3-ddd6-4e31-bcd3-15fa6fbc19a8 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.359053527Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.435258478Z" level=info msg="Created container 7f97d1a1e175b51d7a889f9fe8b94ec1d245d9c3ad1f48bb929cc3544665036a: default/busybox-7b57f96db7-wj4r5/busybox" id=05a5a4c3-ddd6-4e31-bcd3-15fa6fbc19a8 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.436586730Z" level=info msg="Starting container: 7f97d1a1e175b51d7a889f9fe8b94ec1d245d9c3ad1f48bb929cc3544665036a" id=134529e8-d9b9-4298-b3e5-c73a5d72f6fd name=/runtime.v1.RuntimeService/StartContainer
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.446220694Z" level=info msg="Started container" PID=2585 containerID=7f97d1a1e175b51d7a889f9fe8b94ec1d245d9c3ad1f48bb929cc3544665036a description=default/busybox-7b57f96db7-wj4r5/busybox id=134529e8-d9b9-4298-b3e5-c73a5d72f6fd name=/runtime.v1.RuntimeService/StartContainer sandboxID=6347f27b59723d9ed5d766202817f12864c3d029b677244c2214fe27b0e75f0f
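The CRI-O entries for busybox-7b57f96db7-wj4r5 above trace the standard CRI call sequence: RunPodSandbox, ImageStatus (miss), PullImage, CreateContainer, StartContainer. A minimal sketch of the image half of that exchange over the CRI gRPC socket (illustrative only; the socket path and image name are taken from the log, the client code is not minikube's or kubelet's):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28"}
	client := runtimeapi.NewImageServiceClient(conn)
	ctx := context.Background()

	// "Checking image status": a nil Image in the response means not found.
	status, err := client.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: img})
	if err != nil {
		panic(err)
	}
	if status.Image == nil {
		// "Pulling image": fall through to PullImage, as the log shows.
		if _, err := client.PullImage(ctx, &runtimeapi.PullImageRequest{Image: img}); err != nil {
			panic(err)
		}
	}
	fmt.Println("image present")
}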
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7f97d1a1e175b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   33 seconds ago       Running             busybox                   0                   6347f27b59723       busybox-7b57f96db7-wj4r5
	1b2322cca7366       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      About a minute ago   Running             coredns                   0                   39dc71832b8bb       coredns-66bc5c9577-vfj56
	2f150c7f7dc18       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       0                   f228c8ac21369       storage-provisioner
	4fd73d6446292       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      About a minute ago   Running             coredns                   0                   92ca6f4389168       coredns-66bc5c9577-mqh24
	97d03ed4f05c2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      2 minutes ago        Running             kindnet-cni               0                   ad7fd40f66a01       kindnet-9zvhz
	beeb8e61abad9       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      2 minutes ago        Running             kube-proxy                0                   527193be2b767       kube-proxy-f58dt
	ecb56d4cc4c88       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     2 minutes ago        Running             kube-vip                  0                   852e4beaeede7       kube-vip-ha-671025
	7a41c39db49f4       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      2 minutes ago        Running             kube-scheduler            0                   2a00cabb8a637       kube-scheduler-ha-671025
	d4e775bc05e92       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      2 minutes ago        Running             kube-apiserver            0                   e909c5565b688       kube-apiserver-ha-671025
	b966a80c48716       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      2 minutes ago        Running             kube-controller-manager   0                   9e2f63f3286f1       kube-controller-manager-ha-671025
	7819068a50e98       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      2 minutes ago        Running             etcd                      0                   985f7f1c3407d       etcd-ha-671025
	
	
	==> coredns [1b2322cca73664c31f8f758bee585a6b9e12f3a99cb34f8075ed9d4ba6a7424e] <==
	[INFO] 10.244.0.4:52527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000231229s
	[INFO] 10.244.0.4:39416 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.0015558s
	[INFO] 10.244.0.4:45468 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000706318s
	[INFO] 10.244.0.4:53485 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.000087472s
	[INFO] 10.244.1.2:37939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156622s
	[INFO] 10.244.1.2:47463 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.000147027s
	[INFO] 10.244.2.2:34151 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011555178s
	[INFO] 10.244.2.2:39096 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.081855349s
	[INFO] 10.244.2.2:40937 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000241541s
	[INFO] 10.244.0.4:56066 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205334s
	[INFO] 10.244.0.4:52703 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000134531s
	[INFO] 10.244.0.4:56844 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000105782s
	[INFO] 10.244.0.4:52436 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144945s
	[INFO] 10.244.1.2:42520 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154899s
	[INFO] 10.244.1.2:36438 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000196498s
	[INFO] 10.244.2.2:42902 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170395s
	[INFO] 10.244.2.2:44897 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143905s
	[INFO] 10.244.0.4:59616 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105243s
	[INFO] 10.244.1.2:39631 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002321s
	[INFO] 10.244.1.2:59007 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009976s
	[INFO] 10.244.2.2:53521 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000146002s
	[INFO] 10.244.2.2:56762 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000164207s
	[INFO] 10.244.0.4:51093 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145402s
	[INFO] 10.244.0.4:37880 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097925s
	[INFO] 10.244.1.2:55010 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144896s
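The query mix above (NXDOMAIN for bare kubernetes.default, NOERROR for kubernetes.default.svc.cluster.local) reflects the pod resolver's search-list expansion under the cluster's ndots setting: short names are tried against each search suffix, producing the intermediate NXDOMAIN answers, before the fully qualified form succeeds. A minimal in-cluster sketch of the lookup (illustrative only; assumes it runs in a pod using the cluster DNS):

package main

import (
	"fmt"
	"net"
)

func main() {
	// The short form triggers search-list expansion (the extra NXDOMAIN round
	// trips in the coredns log); the fully qualified form resolves directly.
	for _, host := range []string{
		"kubernetes.default",
		"kubernetes.default.svc.cluster.local",
	} {
		addrs, err := net.LookupHost(host)
		fmt.Println(host, addrs, err)
	}
}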
	
	
	==> coredns [4fd73d6446292f190b136d89cd25bf39fce256818f5056f6d2665d5e4fa5ebd5] <==
	[INFO] 10.244.2.2:37478 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001401s
	[INFO] 10.244.0.4:32873 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013759s
	[INFO] 10.244.0.4:37452 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.006758446s
	[INFO] 10.244.0.4:53096 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156627s
	[INFO] 10.244.0.4:33933 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125115s
	[INFO] 10.244.1.2:46463 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000282565s
	[INFO] 10.244.1.2:39686 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00021884s
	[INFO] 10.244.1.2:54348 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01683783s
	[INFO] 10.244.1.2:54156 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000247643s
	[INFO] 10.244.1.2:51012 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000248315s
	[INFO] 10.244.1.2:49586 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095306s
	[INFO] 10.244.2.2:42847 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150928s
	[INFO] 10.244.2.2:38291 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000461737s
	[INFO] 10.244.0.4:57992 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127693s
	[INFO] 10.244.0.4:53956 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000219562s
	[INFO] 10.244.0.4:34480 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117878s
	[INFO] 10.244.1.2:37372 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177692s
	[INFO] 10.244.1.2:44790 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000227814s
	[INFO] 10.244.2.2:55057 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193926s
	[INFO] 10.244.2.2:51005 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158043s
	[INFO] 10.244.0.4:57976 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000144447s
	[INFO] 10.244.0.4:45233 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113362s
	[INFO] 10.244.1.2:59399 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116822s
	[INFO] 10.244.1.2:55814 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000105565s
	[INFO] 10.244.1.2:33844 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000129758s
	
	
	==> describe nodes <==
	Name:               ha-671025
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T00_28_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:28:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:30:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:30:27 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:30:27 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:30:27 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:30:27 +0000   Wed, 17 Sep 2025 00:28:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-671025
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf085e2718b148b5ad91c414953b197e
	  System UUID:                3f139a28-0338-43b0-8ed0-9128b9dcda65
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wj4r5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 coredns-66bc5c9577-mqh24             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m5s
	  kube-system                 coredns-66bc5c9577-vfj56             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m5s
	  kube-system                 etcd-ha-671025                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m11s
	  kube-system                 kindnet-9zvhz                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m5s
	  kube-system                 kube-apiserver-ha-671025             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-controller-manager-ha-671025    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-proxy-f58dt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-scheduler-ha-671025             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-vip-ha-671025                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m4s                   kube-proxy       
	  Normal  NodeHasSufficientPID     2m15s (x8 over 2m15s)  kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m15s (x8 over 2m15s)  kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m15s (x8 over 2m15s)  kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m11s                  kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s                  kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s                  kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m6s                   node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  NodeReady                114s                   kubelet          Node ha-671025 status is now: NodeReady
	  Normal  RegisteredNode           96s                    node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           59s                    node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	
	
	Name:               ha-671025-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_17T00_29_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:29:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:30:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:30:22 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:30:22 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:30:22 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:30:22 +0000   Wed, 17 Sep 2025 00:29:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-671025-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d9e6a6baf694e3db7d6670efecf289a
	  System UUID:                7d7ccba3-1786-4f88-a69c-4a852e967ea0
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-zw5tc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 etcd-ha-671025-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         92s
	  kube-system                 kindnet-7scsq                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      94s
	  kube-system                 kube-apiserver-ha-671025-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-ha-671025-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-4k8lz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-ha-671025-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-vip-ha-671025-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        89s   kube-proxy       
	  Normal  RegisteredNode  91s   node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode  91s   node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode  59s   node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	
	
	Name:               ha-671025-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_17T00_29_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:29:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:30:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:30:09 +0000   Wed, 17 Sep 2025 00:29:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:30:09 +0000   Wed, 17 Sep 2025 00:29:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:30:09 +0000   Wed, 17 Sep 2025 00:29:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:30:09 +0000   Wed, 17 Sep 2025 00:29:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-671025-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 660e9daa5dff498295dc0311dee374a4
	  System UUID:                ca019c4e-efee-45a1-854b-8ad90ea7fdf4
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-dk9cf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 etcd-ha-671025-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         55s
	  kube-system                 kindnet-9w6f7                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-ha-671025-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-controller-manager-ha-671025-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-proxy-q96zd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-ha-671025-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-vip-ha-671025-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        54s   kube-proxy       
	  Normal  RegisteredNode  56s   node-controller  Node ha-671025-m03 event: Registered Node ha-671025-m03 in Controller
	  Normal  RegisteredNode  56s   node-controller  Node ha-671025-m03 event: Registered Node ha-671025-m03 in Controller
	  Normal  RegisteredNode  54s   node-controller  Node ha-671025-m03 event: Registered Node ha-671025-m03 in Controller
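All three node descriptions above report Ready=True via the KubeletReady condition, which is what the pod_ready/node polling earlier in this log checks. A minimal client-go sketch of the same check (illustrative, not minikube's code; assumes a kubeconfig at the default location):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			// Same condition kubectl surfaces as "Ready ... KubeletReady".
			if c.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s (%s)\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}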
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	
	
	==> etcd [7819068a50e981a28f7aac6e0ffa00b30498aa7a8728f90c252a1dde8a63172c] <==
	{"level":"info","ts":"2025-09-17T00:29:31.065556Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"58f1161d61ce118"}
	{"level":"info","ts":"2025-09-17T00:29:31.065590Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"58f1161d61ce118"}
	{"level":"warn","ts":"2025-09-17T00:29:31.067668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:45592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:29:31.084476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:45608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:29:31.100788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:45628","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:29:38.662149Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-17T00:29:42.031334Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-17T00:29:58.835058Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-17T00:29:58.991840Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-17T00:30:01.018301Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"58f1161d61ce118","bytes":1446419,"size":"1.4 MB","took":"30.017682684s"}
	{"level":"info","ts":"2025-09-17T00:30:09.501879Z","caller":"traceutil/trace.go:172","msg":"trace[2146072419] linearizableReadLoop","detail":"{readStateIndex:1188; appliedIndex:1188; }","duration":"141.203793ms","start":"2025-09-17T00:30:09.360647Z","end":"2025-09-17T00:30:09.501850Z","steps":["trace[2146072419] 'read index received'  (duration: 141.195963ms)","trace[2146072419] 'applied index is now lower than readState.Index'  (duration: 5.958µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-17T00:30:09.505268Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.27894ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040018158788372 > lease_revoke:<id:70cc995512839e0c>","response":"size:29"}
	{"level":"warn","ts":"2025-09-17T00:30:09.505347Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"144.683214ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:30:09.505532Z","caller":"traceutil/trace.go:172","msg":"trace[500820100] transaction","detail":"{read_only:false; response_revision:1005; number_of_response:1; }","duration":"139.040911ms","start":"2025-09-17T00:30:09.366470Z","end":"2025-09-17T00:30:09.505511Z","steps":["trace[500820100] 'process raft request'  (duration: 138.89516ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:30:09.505551Z","caller":"traceutil/trace.go:172","msg":"trace[1619350159] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices; range_end:; response_count:0; response_revision:1004; }","duration":"144.895328ms","start":"2025-09-17T00:30:09.360635Z","end":"2025-09-17T00:30:09.505530Z","steps":["trace[1619350159] 'agreement among raft nodes before linearized reading'  (duration: 141.300792ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:30:09.778515Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"170.407706ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:30:09.778612Z","caller":"traceutil/trace.go:172","msg":"trace[1181430234] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1005; }","duration":"170.522946ms","start":"2025-09-17T00:30:09.608073Z","end":"2025-09-17T00:30:09.778596Z","steps":["trace[1181430234] 'range keys from in-memory index tree'  (duration: 169.782684ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:30:26.742546Z","caller":"traceutil/trace.go:172","msg":"trace[1301104523] linearizableReadLoop","detail":"{readStateIndex:1240; appliedIndex:1240; }","duration":"134.800942ms","start":"2025-09-17T00:30:26.607715Z","end":"2025-09-17T00:30:26.742516Z","steps":["trace[1301104523] 'read index received'  (duration: 134.794574ms)","trace[1301104523] 'applied index is now lower than readState.Index'  (duration: 5.057µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-17T00:30:26.742702Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.951869ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:30:26.742764Z","caller":"traceutil/trace.go:172","msg":"trace[559742275] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1045; }","duration":"135.049537ms","start":"2025-09-17T00:30:26.607704Z","end":"2025-09-17T00:30:26.742754Z","steps":["trace[559742275] 'agreement among raft nodes before linearized reading'  (duration: 134.912912ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:30:26.742748Z","caller":"traceutil/trace.go:172","msg":"trace[1407010545] transaction","detail":"{read_only:false; response_revision:1046; number_of_response:1; }","duration":"138.186392ms","start":"2025-09-17T00:30:26.604547Z","end":"2025-09-17T00:30:26.742734Z","steps":["trace[1407010545] 'process raft request'  (duration: 138.044509ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:30:27.284481Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"b65d66e84a12b94b","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"23.876704ms"}
	{"level":"warn","ts":"2025-09-17T00:30:27.284588Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"58f1161d61ce118","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"23.977845ms"}
	{"level":"info","ts":"2025-09-17T00:30:27.284875Z","caller":"traceutil/trace.go:172","msg":"trace[1317115850] transaction","detail":"{read_only:false; response_revision:1048; number_of_response:1; }","duration":"128.236157ms","start":"2025-09-17T00:30:27.156624Z","end":"2025-09-17T00:30:27.284860Z","steps":["trace[1317115850] 'process raft request'  (duration: 128.097873ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:30:27.895598Z","caller":"traceutil/trace.go:172","msg":"trace[11920158] transaction","detail":"{read_only:false; response_revision:1050; number_of_response:1; }","duration":"148.026679ms","start":"2025-09-17T00:30:27.747545Z","end":"2025-09-17T00:30:27.895572Z","steps":["trace[11920158] 'process raft request'  (duration: 101.895012ms)","trace[11920158] 'compare'  (duration: 45.996426ms)"],"step_count":2}
	
	
	==> kernel <==
	 00:30:35 up  3:12,  0 users,  load average: 0.78, 0.46, 5.18
	Linux ha-671025 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [97d03ed4f05c2c8a7edb2014248bdbf3d9cfbee7da82980f69fec92e92471166] <==
	I0917 00:29:51.204752       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:30:01.203411       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:30:01.203462       1 main.go:301] handling current node
	I0917 00:30:01.203482       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:30:01.203490       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:30:01.203701       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:30:01.203714       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:30:11.204491       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:30:11.204552       1 main.go:301] handling current node
	I0917 00:30:11.204574       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:30:11.204583       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:30:11.204798       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:30:11.204810       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:30:21.212489       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:30:21.212536       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:30:21.212827       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:30:21.212840       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:30:21.212973       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:30:21.212983       1 main.go:301] handling current node
	I0917 00:30:31.203606       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:30:31.203652       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:30:31.203966       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:30:31.203990       1 main.go:301] handling current node
	I0917 00:30:31.204009       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:30:31.204015       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [d4e775bc05e92406988cf96c77fa7e581cfe8cc2f3f70e1efc89c2ec23a63e4a] <==
	I0917 00:28:24.325254       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0917 00:28:24.746459       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0917 00:28:24.756910       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0917 00:28:24.764710       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0917 00:28:29.928906       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:28:29.932824       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:28:30.328091       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0917 00:28:30.429040       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0917 00:29:34.977143       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:29:44.951924       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0917 00:30:02.333807       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45142: use of closed network connection
	E0917 00:30:02.515957       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45160: use of closed network connection
	E0917 00:30:02.696738       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45172: use of closed network connection
	E0917 00:30:02.975357       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45188: use of closed network connection
	E0917 00:30:03.163201       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45206: use of closed network connection
	E0917 00:30:03.360510       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45214: use of closed network connection
	E0917 00:30:03.537260       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45238: use of closed network connection
	E0917 00:30:03.723220       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45262: use of closed network connection
	E0917 00:30:03.899588       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45288: use of closed network connection
	E0917 00:30:04.199638       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45314: use of closed network connection
	E0917 00:30:04.375427       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45330: use of closed network connection
	E0917 00:30:04.546665       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45360: use of closed network connection
	E0917 00:30:04.718966       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45380: use of closed network connection
	E0917 00:30:04.893333       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45402: use of closed network connection
	E0917 00:30:05.069202       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45414: use of closed network connection
	
	
	==> kube-controller-manager [b966a80c487167a8ef5e8ce7981e5a50b500e5d8ce6a71e00ed74b342da31465] <==
	I0917 00:28:29.324302       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:28:29.324327       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0917 00:28:29.324356       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0917 00:28:29.325297       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0917 00:28:29.325324       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0917 00:28:29.325364       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0917 00:28:29.325335       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0917 00:28:29.325427       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0917 00:28:29.326766       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0917 00:28:29.333261       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:28:29.333638       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:28:29.333657       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0917 00:28:29.333665       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 00:28:29.340961       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0917 00:28:29.343294       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0917 00:28:29.353739       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:28:44.313285       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E0917 00:29:00.309163       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-g7wk8 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-g7wk8\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0917 00:29:00.997925       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-671025-m02\" does not exist"
	I0917 00:29:01.017089       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-671025-m02" podCIDRs=["10.244.1.0/24"]
	I0917 00:29:04.315749       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-671025-m02"
	E0917 00:29:37.100559       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-4vrlk failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-4vrlk\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0917 00:29:38.581695       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-671025-m03\" does not exist"
	I0917 00:29:38.589924       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-671025-m03" podCIDRs=["10.244.2.0/24"]
	I0917 00:29:39.436557       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-671025-m03"
	
	
	==> kube-proxy [beeb8e61abad9cff9c53d8b6d7bd473fa1b23bbe18bf4739d34ffc8956376ff2] <==
	I0917 00:28:30.830323       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:28:30.891652       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:28:30.992026       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:28:30.992089       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:28:30.992227       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:28:31.013108       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:28:31.013179       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:28:31.018687       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:28:31.019218       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:28:31.019253       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:28:31.020737       1 config.go:200] "Starting service config controller"
	I0917 00:28:31.020764       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:28:31.020800       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:28:31.020809       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:28:31.020897       1 config.go:309] "Starting node config controller"
	I0917 00:28:31.020964       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:28:31.021001       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:28:31.021018       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:28:31.021055       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:28:31.121005       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:28:31.121031       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:28:31.121168       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7a41c39db49f45380d579839f82d520984625d29f4dabaef0381390e6bdf676a] <==
	E0917 00:28:22.635845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:28:22.635883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0917 00:28:22.635646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0917 00:28:22.635968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0917 00:28:22.636038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0917 00:28:22.636058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0917 00:28:22.636404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0917 00:28:22.636428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0917 00:28:22.636582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0917 00:28:22.636623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0917 00:28:22.636965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0917 00:28:23.460819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0917 00:28:23.509027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0917 00:28:23.580561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:28:23.582654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0917 00:28:23.693685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0917 00:28:26.831507       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0917 00:29:01.061353       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-t9sbk\": pod kindnet-t9sbk is already assigned to node \"ha-671025-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-t9sbk" node="ha-671025-m02"
	E0917 00:29:01.061564       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 138da6b8-9faf-407f-8647-78ecb92029f1(kube-system/kindnet-t9sbk) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-t9sbk"
	E0917 00:29:01.061607       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-t9sbk\": pod kindnet-t9sbk is already assigned to node \"ha-671025-m02\"" logger="UnhandledError" pod="kube-system/kindnet-t9sbk"
	I0917 00:29:01.062825       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-t9sbk" node="ha-671025-m02"
	E0917 00:29:38.625075       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-q96zd\": pod kube-proxy-q96zd is already assigned to node \"ha-671025-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-q96zd" node="ha-671025-m03"
	E0917 00:29:38.625173       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 9fe8a312-c296-4c84-9c30-5e578c24e82e(kube-system/kube-proxy-q96zd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-q96zd"
	E0917 00:29:38.625194       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-q96zd\": pod kube-proxy-q96zd is already assigned to node \"ha-671025-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-q96zd"
	I0917 00:29:38.626798       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-q96zd" node="ha-671025-m03"
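The two DefaultBinder conflicts above are transient races while m02 and m03 join: a second bind attempt hits a pod that is already assigned, and the final "Abort adding it back to queue" lines show the scheduler recovering on its own. A quick hand check that the racing pods really landed (pod names taken from the log):

    kubectl --context ha-671025 -n kube-system get pods -o wide | grep -E 'kindnet-t9sbk|kube-proxy-q96zd'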
	
	
	==> kubelet <==
	Sep 17 00:28:44 ha-671025 kubelet[1668]: E0917 00:28:44.581551    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068924581205544  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:28:54 ha-671025 kubelet[1668]: E0917 00:28:54.582755    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068934582486457  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:28:54 ha-671025 kubelet[1668]: E0917 00:28:54.582788    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068934582486457  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:04 ha-671025 kubelet[1668]: E0917 00:29:04.584007    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068944583759061  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:04 ha-671025 kubelet[1668]: E0917 00:29:04.584046    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068944583759061  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:14 ha-671025 kubelet[1668]: E0917 00:29:14.585159    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068954584899808  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:14 ha-671025 kubelet[1668]: E0917 00:29:14.585207    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068954584899808  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:24 ha-671025 kubelet[1668]: E0917 00:29:24.586593    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068964586327984  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:24 ha-671025 kubelet[1668]: E0917 00:29:24.586624    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068964586327984  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:34 ha-671025 kubelet[1668]: E0917 00:29:34.587985    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068974587766323  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:34 ha-671025 kubelet[1668]: E0917 00:29:34.588046    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068974587766323  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:44 ha-671025 kubelet[1668]: E0917 00:29:44.589297    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068984589063590  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:44 ha-671025 kubelet[1668]: E0917 00:29:44.589343    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068984589063590  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:54 ha-671025 kubelet[1668]: E0917 00:29:54.592592    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068994591703153  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:54 ha-671025 kubelet[1668]: E0917 00:29:54.592634    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068994591703153  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:58 ha-671025 kubelet[1668]: I0917 00:29:58.902373    1668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n7vc\" (UniqueName: \"kubernetes.io/projected/90adda6e-a8af-41fd-880e-3820a76c660d-kube-api-access-2n7vc\") pod \"busybox-7b57f96db7-wj4r5\" (UID: \"90adda6e-a8af-41fd-880e-3820a76c660d\") " pod="default/busybox-7b57f96db7-wj4r5"
	Sep 17 00:30:02 ha-671025 kubelet[1668]: E0917 00:30:02.515952    1668 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41316->127.0.0.1:37239: write tcp 127.0.0.1:41316->127.0.0.1:37239: write: broken pipe
	Sep 17 00:30:04 ha-671025 kubelet[1668]: E0917 00:30:04.594113    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069004593825500  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:04 ha-671025 kubelet[1668]: E0917 00:30:04.594155    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069004593825500  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:14 ha-671025 kubelet[1668]: E0917 00:30:14.595504    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069014595204257  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:14 ha-671025 kubelet[1668]: E0917 00:30:14.595637    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069014595204257  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:24 ha-671025 kubelet[1668]: E0917 00:30:24.597161    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069024596864722  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:24 ha-671025 kubelet[1668]: E0917 00:30:24.597200    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069024596864722  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:34 ha-671025 kubelet[1668]: E0917 00:30:34.598240    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069034598011866  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:34 ha-671025 kubelet[1668]: E0917 00:30:34.598284    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069034598011866  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	

                                                
                                                
-- /stdout --
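The kubelet log above is dominated by the repeating eviction-manager "missing image stats" error, which appears to stem from cri-o not returning the imageFs stats fields the kubelet's HasDedicatedImageFs check expects. A minimal sketch for seeing what the runtime actually reports, assuming crictl is present in the node image as usual:

    # query image filesystem stats straight from the CRI endpoint on the node
    out/minikube-linux-amd64 -p ha-671025 ssh -- sudo crictl imagefsinfo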
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-671025 -n ha-671025
helpers_test.go:269: (dbg) Run:  kubectl --context ha-671025 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (30.98s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (16.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 status --output json --alsologtostderr -v 5: exit status 7 (751.704594ms)

                                                
                                                
-- stdout --
	[{"Name":"ha-671025","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-671025-m02","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-671025-m03","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-671025-m04","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:30:36.941809  604688 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:30:36.942140  604688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:30:36.942151  604688 out.go:374] Setting ErrFile to fd 2...
	I0917 00:30:36.942157  604688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:30:36.942379  604688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:30:36.942610  604688 out.go:368] Setting JSON to true
	I0917 00:30:36.942637  604688 mustload.go:65] Loading cluster: ha-671025
	I0917 00:30:36.942751  604688 notify.go:220] Checking for updates...
	I0917 00:30:36.943086  604688 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:30:36.943122  604688 status.go:174] checking status of ha-671025 ...
	I0917 00:30:36.943602  604688 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:30:36.964520  604688 status.go:371] ha-671025 host status = "Running" (err=<nil>)
	I0917 00:30:36.964557  604688 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:30:36.964924  604688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:30:36.983085  604688 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:30:36.983340  604688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:30:36.983379  604688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:30:37.001965  604688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:30:37.097519  604688 ssh_runner.go:195] Run: systemctl --version
	I0917 00:30:37.102595  604688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:30:37.115480  604688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:30:37.176722  604688 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:30:37.165651073 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:30:37.177363  604688 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:30:37.177426  604688 api_server.go:166] Checking apiserver status ...
	I0917 00:30:37.177465  604688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:30:37.190438  604688 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	W0917 00:30:37.201199  604688 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:30:37.201285  604688 ssh_runner.go:195] Run: ls
	I0917 00:30:37.205600  604688 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:30:37.209989  604688 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:30:37.210021  604688 status.go:463] ha-671025 apiserver status = Running (err=<nil>)
	I0917 00:30:37.210033  604688 status.go:176] ha-671025 status: &{Name:ha-671025 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:30:37.210060  604688 status.go:174] checking status of ha-671025-m02 ...
	I0917 00:30:37.210308  604688 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:30:37.229587  604688 status.go:371] ha-671025-m02 host status = "Running" (err=<nil>)
	I0917 00:30:37.229614  604688 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:30:37.229863  604688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:30:37.247795  604688 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:30:37.248067  604688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:30:37.248125  604688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:30:37.267458  604688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:30:37.362933  604688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:30:37.377078  604688 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:30:37.377107  604688 api_server.go:166] Checking apiserver status ...
	I0917 00:30:37.377137  604688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:30:37.390861  604688 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1371/cgroup
	W0917 00:30:37.402570  604688 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1371/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:30:37.402630  604688 ssh_runner.go:195] Run: ls
	I0917 00:30:37.406892  604688 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:30:37.411553  604688 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:30:37.411583  604688 status.go:463] ha-671025-m02 apiserver status = Running (err=<nil>)
	I0917 00:30:37.411594  604688 status.go:176] ha-671025-m02 status: &{Name:ha-671025-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:30:37.411610  604688 status.go:174] checking status of ha-671025-m03 ...
	I0917 00:30:37.411878  604688 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:30:37.431633  604688 status.go:371] ha-671025-m03 host status = "Running" (err=<nil>)
	I0917 00:30:37.431661  604688 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:30:37.431957  604688 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:30:37.452177  604688 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:30:37.452545  604688 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:30:37.452610  604688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:30:37.473852  604688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:30:37.569980  604688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:30:37.584707  604688 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:30:37.584735  604688 api_server.go:166] Checking apiserver status ...
	I0917 00:30:37.584767  604688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:30:37.598117  604688 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup
	W0917 00:30:37.610011  604688 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:30:37.610077  604688 ssh_runner.go:195] Run: ls
	I0917 00:30:37.614371  604688 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:30:37.619011  604688 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:30:37.619042  604688 status.go:463] ha-671025-m03 apiserver status = Running (err=<nil>)
	I0917 00:30:37.619055  604688 status.go:176] ha-671025-m03 status: &{Name:ha-671025-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:30:37.619082  604688 status.go:174] checking status of ha-671025-m04 ...
	I0917 00:30:37.619469  604688 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:30:37.638909  604688 status.go:371] ha-671025-m04 host status = "Stopped" (err=<nil>)
	I0917 00:30:37.638932  604688 status.go:384] host is not running, skipping remaining checks
	I0917 00:30:37.638939  604688 status.go:176] ha-671025-m04 status: &{Name:ha-671025-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
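The JSON printed by `status --output json` already isolates the failing member. A minimal sketch for extracting the stopped nodes from it, assuming jq is available on the host (note that minikube exits 7 when any node is down, but the JSON still arrives on stdout):

    out/minikube-linux-amd64 -p ha-671025 status --output json \
      | jq -r '.[] | select(.Host == "Stopped") | .Name'
    # prints: ha-671025-m04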
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 cp testdata/cp-test.txt ha-671025:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 cp ha-671025:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile688907033/001/cp-test_ha-671025.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 cp ha-671025:/home/docker/cp-test.txt ha-671025-m02:/home/docker/cp-test_ha-671025_ha-671025-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m02 "sudo cat /home/docker/cp-test_ha-671025_ha-671025-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 cp ha-671025:/home/docker/cp-test.txt ha-671025-m03:/home/docker/cp-test_ha-671025_ha-671025-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m03 "sudo cat /home/docker/cp-test_ha-671025_ha-671025-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 cp ha-671025:/home/docker/cp-test.txt ha-671025-m04:/home/docker/cp-test_ha-671025_ha-671025-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 cp ha-671025:/home/docker/cp-test.txt ha-671025-m04:/home/docker/cp-test_ha-671025_ha-671025-m04.txt: exit status 1 (155.981747ms)

                                                
                                                
** stderr ** 
	getting host: "ha-671025-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-671025 cp ha-671025:/home/docker/cp-test.txt ha-671025-m04:/home/docker/cp-test_ha-671025_ha-671025-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 "sudo cat /home/docker/cp-test_ha-671025_ha-671025-m04.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 "sudo cat /home/docker/cp-test_ha-671025_ha-671025-m04.txt": exit status 1 (153.471242ms)

                                                
                                                
** stderr ** 
	ssh: "ha-671025-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 \"sudo cat /home/docker/cp-test_ha-671025_ha-671025-m04.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"Test file for checking file cp process",
+ 	"",
  )
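Each of these mismatches is the same copy-then-read-back round trip failing against the stopped node. A hand-run sketch of the check against a node that is up, using the exact commands the helper issues (the expected content is the want string in the diff above):

    out/minikube-linux-amd64 -p ha-671025 cp testdata/cp-test.txt ha-671025:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025 "sudo cat /home/docker/cp-test.txt"
    # should print: Test file for checking file cp process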
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 cp testdata/cp-test.txt ha-671025-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile688907033/001/cp-test_ha-671025-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m02:/home/docker/cp-test.txt ha-671025:/home/docker/cp-test_ha-671025-m02_ha-671025.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025 "sudo cat /home/docker/cp-test_ha-671025-m02_ha-671025.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m02:/home/docker/cp-test.txt ha-671025-m03:/home/docker/cp-test_ha-671025-m02_ha-671025-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m03 "sudo cat /home/docker/cp-test_ha-671025-m02_ha-671025-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m02:/home/docker/cp-test.txt ha-671025-m04:/home/docker/cp-test_ha-671025-m02_ha-671025-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m02:/home/docker/cp-test.txt ha-671025-m04:/home/docker/cp-test_ha-671025-m02_ha-671025-m04.txt: exit status 1 (147.179242ms)

                                                
                                                
** stderr ** 
	getting host: "ha-671025-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m02:/home/docker/cp-test.txt ha-671025-m04:/home/docker/cp-test_ha-671025-m02_ha-671025-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 "sudo cat /home/docker/cp-test_ha-671025-m02_ha-671025-m04.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 "sudo cat /home/docker/cp-test_ha-671025-m02_ha-671025-m04.txt": exit status 1 (158.25368ms)

                                                
                                                
** stderr ** 
	ssh: "ha-671025-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 \"sudo cat /home/docker/cp-test_ha-671025-m02_ha-671025-m04.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"Test file for checking file cp process",
+ 	"",
  )
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 cp testdata/cp-test.txt ha-671025-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile688907033/001/cp-test_ha-671025-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt ha-671025:/home/docker/cp-test_ha-671025-m03_ha-671025.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025 "sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt ha-671025-m02:/home/docker/cp-test_ha-671025-m03_ha-671025-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m02 "sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt ha-671025-m04:/home/docker/cp-test_ha-671025-m03_ha-671025-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt ha-671025-m04:/home/docker/cp-test_ha-671025-m03_ha-671025-m04.txt: exit status 1 (152.583443ms)

                                                
                                                
** stderr ** 
	getting host: "ha-671025-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt ha-671025-m04:/home/docker/cp-test_ha-671025-m03_ha-671025-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 "sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025-m04.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 "sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025-m04.txt": exit status 1 (149.139713ms)

                                                
                                                
** stderr ** 
	ssh: "ha-671025-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 \"sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025-m04.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"Test file for checking file cp process",
+ 	"",
  )
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 cp testdata/cp-test.txt ha-671025-m04:/home/docker/cp-test.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 cp testdata/cp-test.txt ha-671025-m04:/home/docker/cp-test.txt: exit status 1 (148.221352ms)

                                                
                                                
** stderr ** 
	getting host: "ha-671025-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-671025 cp testdata/cp-test.txt ha-671025-m04:/home/docker/cp-test.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (148.23562ms)

                                                
                                                
** stderr ** 
	ssh: "ha-671025-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"Test file for checking file cp process",
+ 	"",
  )
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile688907033/001/cp-test_ha-671025-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile688907033/001/cp-test_ha-671025-m04.txt: exit status 1 (149.294784ms)

                                                
                                                
** stderr ** 
	getting host: "ha-671025-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile688907033/001/cp-test_ha-671025-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (146.437184ms)

                                                
                                                
** stderr ** 
	ssh: "ha-671025-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:545: failed to read test file 'testdata/cp-test.txt' : open /tmp/TestMultiControlPlaneserialCopyFile688907033/001/cp-test_ha-671025-m04.txt: no such file or directory
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025:/home/docker/cp-test_ha-671025-m04_ha-671025.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025:/home/docker/cp-test_ha-671025-m04_ha-671025.txt: exit status 1 (170.664558ms)

                                                
                                                
** stderr ** 
	getting host: "ha-671025-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025:/home/docker/cp-test_ha-671025-m04_ha-671025.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (154.276465ms)

                                                
                                                
** stderr ** 
	ssh: "ha-671025-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025 "sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025 "sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025.txt": exit status 1 (274.635413ms)

                                                
                                                
-- stdout --
	cat: /home/docker/cp-test_ha-671025-m04_ha-671025.txt: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025 \"sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"",
+ 	"cat: /home/docker/cp-test_ha-671025-m04_ha-671025.txt: No such file or directory\r\n",
  )
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025-m02:/home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025-m02:/home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt: exit status 1 (171.186674ms)

                                                
                                                
** stderr ** 
	getting host: "ha-671025-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025-m02:/home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (147.609171ms)

                                                
                                                
** stderr ** 
	ssh: "ha-671025-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m02 "sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m02 "sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt": exit status 1 (273.550634ms)

                                                
                                                
-- stdout --
	cat: /home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m02 \"sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"",
+ 	"cat: /home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt: No such file or directory\r\n",
  )
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025-m03:/home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025-m03:/home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt: exit status 1 (175.712903ms)

                                                
                                                
** stderr ** 
	getting host: "ha-671025-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025-m03:/home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (150.433072ms)

                                                
                                                
** stderr ** 
	ssh: "ha-671025-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m03 "sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m03 "sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt": exit status 1 (271.292216ms)

                                                
                                                
-- stdout --
	cat: /home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-671025 ssh -n ha-671025-m03 \"sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"",
+ 	"cat: /home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt: No such file or directory\r\n",
  )
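Every m04 failure above reduces to the same root cause reported in stderr: the worker host is not running. A hedged sketch of bringing it back before re-running the copy matrix (node subcommand syntax per recent minikube releases; the short node name follows the -m04 suffix seen above):

    out/minikube-linux-amd64 -p ha-671025 node start m04
    out/minikube-linux-amd64 -p ha-671025 status --output json   # m04 should now report Host "Running"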
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-671025
helpers_test.go:243: (dbg) docker inspect ha-671025:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea",
	        "Created": "2025-09-17T00:28:07.60079298Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 591894,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:28:07.642349633Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/hostname",
	        "HostsPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/hosts",
	        "LogPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea-json.log",
	        "Name": "/ha-671025",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-671025:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-671025",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea",
	                "LowerDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-671025",
	                "Source": "/var/lib/docker/volumes/ha-671025/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-671025",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-671025",
	                "name.minikube.sigs.k8s.io": "ha-671025",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2947b2c900e461fedf4c1b14afccf677c0bbbd5856a737563908fb819f368e69",
	            "SandboxKey": "/var/run/docker/netns/2947b2c900e4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-671025": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:4e:63:a1:43:0d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0c35d0ccc41812bde7181e33c481a92e6c52d2d90efef6c84bca54a78763ef8",
	                    "EndpointID": "e04f7d855de79c251547e2cb959967e0ee3cd816f6030c7dc40e9731e31f953c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-671025",
	                        "843490787feb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
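The inspect dump above is the raw material for the port lookups that appear later in this log: cli_runner reads the mapped SSH port with a Go template over .NetworkSettings.Ports. A minimal standalone sketch of that technique (not minikube's cli_runner; it assumes the ha-671025 container from the dump is still running):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the later cli_runner lines use; for the dump above it
	// would print 33148, the host port Docker bound to the container's 22/tcp.
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"ha-671025").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}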
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-671025 -n ha-671025
helpers_test.go:252: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-671025 logs -n 25: (1.254722451s)
helpers_test.go:260: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ cp      │ ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile688907033/001/cp-test_ha-671025-m03.txt │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ cp      │ ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt ha-671025:/home/docker/cp-test_ha-671025-m03_ha-671025.txt                      │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025 sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025.txt                                                │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ cp      │ ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt ha-671025-m02:/home/docker/cp-test_ha-671025-m03_ha-671025-m02.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m02 sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025-m02.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ cp      │ ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt ha-671025-m04:/home/docker/cp-test_ha-671025-m03_ha-671025-m04.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025-m04.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp testdata/cp-test.txt ha-671025-m04:/home/docker/cp-test.txt                                                            │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile688907033/001/cp-test_ha-671025-m04.txt │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025:/home/docker/cp-test_ha-671025-m04_ha-671025.txt                      │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025.txt                                                │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025-m02:/home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m02 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025-m03:/home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
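	Rows with an empty END TIME are the ha-671025-m04 operations that never completed. A hedged sketch of the round trip each pair of rows performs, driving the same binary with the same profile (paths and node names taken from the table; error handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the binary under test against the ha-671025 profile, mirroring
// the COMMAND/ARGS columns above.
func run(args ...string) (string, error) {
	all := append([]string{"-p", "ha-671025"}, args...)
	out, err := exec.Command("out/minikube-linux-amd64", all...).CombinedOutput()
	return string(out), err
}

func main() {
	// Copy a local file onto node m04, then cat it back over ssh to verify.
	if _, err := run("cp", "testdata/cp-test.txt", "ha-671025-m04:/home/docker/cp-test.txt"); err != nil {
		fmt.Println("cp failed:", err)
		return
	}
	out, err := run("ssh", "-n", "ha-671025-m04", "sudo", "cat", "/home/docker/cp-test.txt")
	fmt.Println(out, err)
}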
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:28:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:28:02.421105  591333 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:28:02.421342  591333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:28:02.421350  591333 out.go:374] Setting ErrFile to fd 2...
	I0917 00:28:02.421355  591333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:28:02.421569  591333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:28:02.422069  591333 out.go:368] Setting JSON to false
	I0917 00:28:02.422989  591333 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11425,"bootTime":1758057457,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:28:02.423098  591333 start.go:140] virtualization: kvm guest
	I0917 00:28:02.425200  591333 out.go:179] * [ha-671025] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:28:02.426666  591333 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:28:02.426650  591333 notify.go:220] Checking for updates...
	I0917 00:28:02.429221  591333 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:28:02.430609  591333 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:28:02.431832  591333 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 00:28:02.433241  591333 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:28:02.434707  591333 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:28:02.436048  591333 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:28:02.460585  591333 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:28:02.460765  591333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:28:02.517630  591333 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:28:02.506821705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:28:02.517750  591333 docker.go:318] overlay module found
	I0917 00:28:02.519568  591333 out.go:179] * Using the docker driver based on user configuration
	I0917 00:28:02.520915  591333 start.go:304] selected driver: docker
	I0917 00:28:02.520935  591333 start.go:918] validating driver "docker" against <nil>
	I0917 00:28:02.520951  591333 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:28:02.521682  591333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:28:02.578543  591333 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:28:02.56897484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:28:02.578724  591333 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0917 00:28:02.578937  591333 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:28:02.580907  591333 out.go:179] * Using Docker driver with root privileges
	I0917 00:28:02.582377  591333 cni.go:84] Creating CNI manager for ""
	I0917 00:28:02.582477  591333 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0917 00:28:02.582493  591333 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 00:28:02.582574  591333 start.go:348] cluster config:
	{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:28:02.583947  591333 out.go:179] * Starting "ha-671025" primary control-plane node in "ha-671025" cluster
	I0917 00:28:02.585129  591333 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:28:02.586454  591333 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:28:02.587786  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:28:02.587830  591333 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 00:28:02.587838  591333 cache.go:58] Caching tarball of preloaded images
	I0917 00:28:02.587843  591333 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:28:02.587944  591333 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:28:02.587958  591333 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:28:02.588350  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:28:02.588379  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json: {Name:mk091aa75e831ff22299b49a9817446c9f212399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:02.609265  591333 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:28:02.609287  591333 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:28:02.609305  591333 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:28:02.609329  591333 start.go:360] acquireMachinesLock for ha-671025: {Name:mk59b9e849284ed1f29625993b42430f4f0355ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:28:02.609454  591333 start.go:364] duration metric: took 102.584µs to acquireMachinesLock for "ha-671025"
	I0917 00:28:02.609482  591333 start.go:93] Provisioning new machine with config: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:28:02.609540  591333 start.go:125] createHost starting for "" (driver="docker")
	I0917 00:28:02.611610  591333 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:28:02.611847  591333 start.go:159] libmachine.API.Create for "ha-671025" (driver="docker")
	I0917 00:28:02.611880  591333 client.go:168] LocalClient.Create starting
	I0917 00:28:02.611969  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 00:28:02.612007  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:28:02.612019  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:28:02.612089  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 00:28:02.612110  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:28:02.612122  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:28:02.612504  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 00:28:02.630138  591333 cli_runner.go:211] docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 00:28:02.630214  591333 network_create.go:284] running [docker network inspect ha-671025] to gather additional debugging logs...
	I0917 00:28:02.630235  591333 cli_runner.go:164] Run: docker network inspect ha-671025
	W0917 00:28:02.647610  591333 cli_runner.go:211] docker network inspect ha-671025 returned with exit code 1
	I0917 00:28:02.647648  591333 network_create.go:287] error running [docker network inspect ha-671025]: docker network inspect ha-671025: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-671025 not found
	I0917 00:28:02.647665  591333 network_create.go:289] output of [docker network inspect ha-671025]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-671025 not found
	
	** /stderr **
	I0917 00:28:02.647783  591333 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:28:02.666874  591333 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014926f0}
	I0917 00:28:02.666937  591333 network_create.go:124] attempt to create docker network ha-671025 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0917 00:28:02.666993  591333 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-671025 ha-671025
	I0917 00:28:02.726570  591333 network_create.go:108] docker network ha-671025 192.168.49.0/24 created
	I0917 00:28:02.726603  591333 kic.go:121] calculated static IP "192.168.49.2" for the "ha-671025" container
	I0917 00:28:02.726684  591333 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:28:02.744335  591333 cli_runner.go:164] Run: docker volume create ha-671025 --label name.minikube.sigs.k8s.io=ha-671025 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:28:02.765618  591333 oci.go:103] Successfully created a docker volume ha-671025
	I0917 00:28:02.765710  591333 cli_runner.go:164] Run: docker run --rm --name ha-671025-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025 --entrypoint /usr/bin/test -v ha-671025:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:28:03.152134  591333 oci.go:107] Successfully prepared a docker volume ha-671025
	I0917 00:28:03.152201  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:28:03.152229  591333 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:28:03.152307  591333 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:28:07.519336  591333 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.366963199s)
	I0917 00:28:07.519373  591333 kic.go:203] duration metric: took 4.3671415s to extract preloaded images to volume ...
	W0917 00:28:07.519497  591333 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:28:07.519557  591333 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:28:07.519606  591333 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:28:07.583258  591333 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-671025 --name ha-671025 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-671025 --network ha-671025 --ip 192.168.49.2 --volume ha-671025:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 00:28:07.861983  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Running}}
	I0917 00:28:07.881740  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:07.902486  591333 cli_runner.go:164] Run: docker exec ha-671025 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:28:07.957445  591333 oci.go:144] the created container "ha-671025" has a running status.
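The pair of container inspect calls just above is a readiness gate between docker run returning and provisioning starting. A small standalone sketch of that gate (the poll interval and timeout here are illustrative, not minikube's values):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// running repeats the check from the inspect lines above:
// docker container inspect --format={{.State.Running}}.
func running(name string) bool {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", "{{.State.Running}}", name).Output()
	return err == nil && strings.TrimSpace(string(out)) == "true"
}

func main() {
	deadline := time.Now().Add(30 * time.Second) // illustrative timeout
	for !running("ha-671025") {
		if time.Now().After(deadline) {
			fmt.Println("container never reached a running status")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("ha-671025 is running")
}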
	I0917 00:28:07.957491  591333 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa...
	I0917 00:28:07.970221  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0917 00:28:07.970277  591333 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:28:07.996810  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:08.018618  591333 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 00:28:08.018648  591333 kic_runner.go:114] Args: [docker exec --privileged ha-671025 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 00:28:08.065859  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:08.088307  591333 machine.go:93] provisionDockerMachine start ...
	I0917 00:28:08.088464  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:08.112791  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:08.113142  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0917 00:28:08.113159  591333 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:28:08.114236  591333 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41092->127.0.0.1:33148: read: connection reset by peer
	I0917 00:28:11.250841  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025
	
	I0917 00:28:11.250869  591333 ubuntu.go:182] provisioning hostname "ha-671025"
	I0917 00:28:11.250946  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:11.270326  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:11.270573  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0917 00:28:11.270589  591333 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025 && echo "ha-671025" | sudo tee /etc/hostname
	I0917 00:28:11.422194  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025
	
	I0917 00:28:11.422282  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:11.441086  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:11.441373  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0917 00:28:11.441412  591333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:28:11.579534  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:28:11.579570  591333 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:28:11.579606  591333 ubuntu.go:190] setting up certificates
	I0917 00:28:11.579621  591333 provision.go:84] configureAuth start
	I0917 00:28:11.579696  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:28:11.598338  591333 provision.go:143] copyHostCerts
	I0917 00:28:11.598381  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:28:11.598438  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:28:11.598450  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:28:11.598528  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:28:11.598637  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:28:11.598660  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:28:11.598668  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:28:11.598709  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:28:11.598793  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:28:11.598818  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:28:11.598827  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:28:11.598863  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:28:11.598936  591333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025 san=[127.0.0.1 192.168.49.2 ha-671025 localhost minikube]
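The san=[...] list in the provision.go line above is exactly what lands in the server certificate. A compact sketch of the same x509 step (self-contained, so it mints a throwaway CA in memory instead of loading certs/ca.pem and ca-key.pem as minikube does; error checks are elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; the real flow loads an existing minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-671025"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-671025", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}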
	I0917 00:28:11.692056  591333 provision.go:177] copyRemoteCerts
	I0917 00:28:11.692126  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:28:11.692177  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:11.710836  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:11.809661  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:28:11.809738  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:28:11.838472  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:28:11.838547  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0917 00:28:11.864972  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:28:11.865064  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 00:28:11.892502  591333 provision.go:87] duration metric: took 312.863604ms to configureAuth
	I0917 00:28:11.892539  591333 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:28:11.892749  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:28:11.892876  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:11.911894  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:11.912108  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0917 00:28:11.912123  591333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:28:12.156893  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:28:12.156918  591333 machine.go:96] duration metric: took 4.068577091s to provisionDockerMachine
	I0917 00:28:12.156929  591333 client.go:171] duration metric: took 9.545042483s to LocalClient.Create
	I0917 00:28:12.156950  591333 start.go:167] duration metric: took 9.54510971s to libmachine.API.Create "ha-671025"
	I0917 00:28:12.156957  591333 start.go:293] postStartSetup for "ha-671025" (driver="docker")
	I0917 00:28:12.156965  591333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:28:12.157043  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:28:12.157079  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:12.175648  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:12.275414  591333 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:28:12.279194  591333 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:28:12.279224  591333 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:28:12.279231  591333 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:28:12.279238  591333 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:28:12.279255  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:28:12.279317  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:28:12.279416  591333 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:28:12.279430  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:28:12.279530  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:28:12.288873  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:28:12.317418  591333 start.go:296] duration metric: took 160.444141ms for postStartSetup
	I0917 00:28:12.317811  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:28:12.336261  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:28:12.336565  591333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:28:12.336607  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:12.354705  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:12.446983  591333 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:28:12.451593  591333 start.go:128] duration metric: took 9.842036225s to createHost
	I0917 00:28:12.451634  591333 start.go:83] releasing machines lock for "ha-671025", held for 9.842165682s
	I0917 00:28:12.451714  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:28:12.469798  591333 ssh_runner.go:195] Run: cat /version.json
	I0917 00:28:12.469852  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:12.469869  591333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:28:12.469931  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:12.489508  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:12.489501  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:12.581676  591333 ssh_runner.go:195] Run: systemctl --version
	I0917 00:28:12.654927  591333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:28:12.796661  591333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:28:12.802016  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:28:12.827191  591333 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:28:12.827278  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:28:12.858197  591333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
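The two find/-exec passes above implement disable-by-rename: any CNI config matching *loopback.conf* or the bridge/podman patterns gets a .mk_disabled suffix so the runtime stops loading it. A hedged local sketch of the same idea (patterns from the commands; the real invocations run over ssh with sudo):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pat := range []string{"*loopback.conf*", "*bridge*", "*podman*"} {
		matches, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println("skip:", m, err)
			}
		}
	}
}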
	I0917 00:28:12.858222  591333 start.go:495] detecting cgroup driver to use...
	I0917 00:28:12.858256  591333 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:28:12.858306  591333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:28:12.874462  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:28:12.887158  591333 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:28:12.887226  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:28:12.902417  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:28:12.917174  591333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:28:12.986628  591333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:28:13.060583  591333 docker.go:234] disabling docker service ...
	I0917 00:28:13.060656  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:28:13.081466  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:28:13.094012  591333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:28:13.164943  591333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:28:13.315404  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:28:13.328708  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:28:13.347694  591333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:28:13.347757  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.361221  591333 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:28:13.361294  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.371972  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.382985  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.394505  591333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:28:13.405096  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.416205  591333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.434282  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.445654  591333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:28:13.454948  591333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:28:13.464245  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:28:13.526087  591333 ssh_runner.go:195] Run: sudo systemctl restart crio
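
Net effect of the sed edits above on the CRI-O drop-in, summarized as a hand-runnable sketch (settings and path taken from the log; minikube edits the existing drop-in in place rather than rewriting it):

    # Resulting keys in /etc/crio/crio.conf.d/02-crio.conf:
    #   pause_image     = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager  = "systemd"
    #   conmon_cgroup   = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
    # Pick up the changes:
    sudo systemctl daemon-reload && sudo systemctl restart crio
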
	I0917 00:28:13.629597  591333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:28:13.629677  591333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:28:13.634535  591333 start.go:563] Will wait 60s for crictl version
	I0917 00:28:13.634599  591333 ssh_runner.go:195] Run: which crictl
	I0917 00:28:13.639122  591333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:28:13.675949  591333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:28:13.676043  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:28:13.713216  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:28:13.752386  591333 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:28:13.753755  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:28:13.771156  591333 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:28:13.775524  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
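
The /etc/hosts rewrite above is minikube's idempotent host-record pattern: drop any existing line for the name, then append a fresh one. A standalone sketch of the same pattern (variable names are illustrative):

    NAME=host.minikube.internal IP=192.168.49.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
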
	I0917 00:28:13.788890  591333 kubeadm.go:875] updating cluster {Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:28:13.789115  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:28:13.789184  591333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:28:13.863780  591333 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:28:13.863811  591333 crio.go:433] Images already preloaded, skipping extraction
	I0917 00:28:13.863873  591333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:28:13.900999  591333 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:28:13.901021  591333 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:28:13.901028  591333 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0917 00:28:13.901149  591333 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:28:13.901218  591333 ssh_runner.go:195] Run: crio config
	I0917 00:28:13.947330  591333 cni.go:84] Creating CNI manager for ""
	I0917 00:28:13.947354  591333 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0917 00:28:13.947367  591333 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:28:13.947398  591333 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-671025 NodeName:ha-671025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:28:13.947540  591333 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-671025"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
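
A config like the one rendered above can be validated without touching the node; a sketch using kubeadm's dry-run mode (minikube itself skips this and runs init directly, as shown further below):

    # Render manifests and run preflight checks without starting anything.
    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run
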
	I0917 00:28:13.947571  591333 kube-vip.go:115] generating kube-vip config ...
	I0917 00:28:13.947618  591333 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:28:13.962176  591333 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:28:13.962288  591333 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
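
Because the ip_vs modules were missing (see the lsmod check above), kube-vip here provides only ARP-based failover of the VIP 192.168.49.254, with leader election over a Lease (5s duration, 3s renew, 1s retry per the env vars). Once the control plane is up, the current holder can be inspected; a sketch using names from the manifest above:

    # Is the VIP bound on this node, and who holds the kube-vip lease?
    ip addr show eth0 | grep 192.168.49.254
    kubectl -n kube-system get lease plndr-cp-lock -o yaml
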
	I0917 00:28:13.962356  591333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:28:13.972318  591333 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:28:13.972409  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:28:13.982775  591333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0917 00:28:14.003185  591333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:28:14.025114  591333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I0917 00:28:14.043893  591333 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0917 00:28:14.063914  591333 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:28:14.067851  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:28:14.079495  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:28:14.146352  591333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:28:14.170001  591333 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.2
	I0917 00:28:14.170029  591333 certs.go:194] generating shared ca certs ...
	I0917 00:28:14.170049  591333 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.170209  591333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:28:14.170248  591333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:28:14.170258  591333 certs.go:256] generating profile certs ...
	I0917 00:28:14.170312  591333 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:28:14.170334  591333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt with IP's: []
	I0917 00:28:14.258881  591333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt ...
	I0917 00:28:14.258912  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt: {Name:mkf356a325e81df463620a9a59f1e19636a8bbe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.259129  591333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key ...
	I0917 00:28:14.259150  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key: {Name:mka2338ec2b6b28954ea0ef14eeb3d06111be43d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.259268  591333 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.42f16444
	I0917 00:28:14.259285  591333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.42f16444 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0917 00:28:14.420479  591333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.42f16444 ...
	I0917 00:28:14.420509  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.42f16444: {Name:mkcf98c32344d33f146459467ae0b529b09930e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.420720  591333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.42f16444 ...
	I0917 00:28:14.420744  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.42f16444: {Name:mk2a9dddb825d571b4beb46eeddb7582f0b5a38a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.420868  591333 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.42f16444 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt
	I0917 00:28:14.420963  591333 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.42f16444 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key
	I0917 00:28:14.421066  591333 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:28:14.421086  591333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt with IP's: []
	I0917 00:28:14.667928  591333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt ...
	I0917 00:28:14.667965  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt: {Name:mk8fc3d9cf0ef31fe8163e3202ec93ff4212c0d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.668186  591333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key ...
	I0917 00:28:14.668205  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key: {Name:mk4aadb37423b11008cecd193572dcb26f4156f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.668320  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:28:14.668341  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:28:14.668351  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:28:14.668364  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:28:14.668375  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:28:14.668386  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:28:14.668408  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:28:14.668420  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:28:14.668487  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:28:14.668524  591333 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:28:14.668533  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:28:14.668554  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:28:14.668631  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:28:14.668666  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:28:14.668710  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:28:14.668747  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:28:14.668764  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:14.668780  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:28:14.669300  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:28:14.695942  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:28:14.721853  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:28:14.746954  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:28:14.773182  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 00:28:14.798782  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:28:14.823720  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:28:14.847907  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:28:14.872531  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:28:14.900554  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:28:14.925365  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:28:14.953903  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:28:14.973565  591333 ssh_runner.go:195] Run: openssl version
	I0917 00:28:14.979257  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:28:14.989070  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:28:14.992786  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:28:14.992847  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:28:14.999827  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:28:15.009762  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:28:15.019180  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:28:15.022635  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:28:15.022690  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:28:15.029591  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:28:15.039107  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:28:15.048628  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:15.052181  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:15.052230  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:15.058893  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
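
The three test-and-link commands above follow OpenSSL's hashed-directory convention: each CA in /etc/ssl/certs is a symlink named <subject-hash>.0, which is why every cert is first run through openssl x509 -hash (b5213941.0 is minikubeCA's hash). The same convention by hand, with one cert from the log as the example:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
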
	I0917 00:28:15.069771  591333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:28:15.073670  591333 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 00:28:15.073738  591333 kubeadm.go:392] StartCluster: {Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:28:15.073818  591333 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 00:28:15.073904  591333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:28:15.110504  591333 cri.go:89] found id: ""
	I0917 00:28:15.110589  591333 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:28:15.119903  591333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 00:28:15.129328  591333 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 00:28:15.129384  591333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 00:28:15.138492  591333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 00:28:15.138510  591333 kubeadm.go:157] found existing configuration files:
	
	I0917 00:28:15.138563  591333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 00:28:15.147903  591333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 00:28:15.147969  591333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 00:28:15.157062  591333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 00:28:15.166583  591333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 00:28:15.166646  591333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 00:28:15.176378  591333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 00:28:15.185922  591333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 00:28:15.185988  591333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 00:28:15.195234  591333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 00:28:15.204565  591333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 00:28:15.204624  591333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 00:28:15.213513  591333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 00:28:15.268809  591333 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0917 00:28:15.322273  591333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 00:28:25.344526  591333 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0917 00:28:25.344586  591333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 00:28:25.344654  591333 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0917 00:28:25.344699  591333 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0917 00:28:25.344758  591333 kubeadm.go:310] OS: Linux
	I0917 00:28:25.344813  591333 kubeadm.go:310] CGROUPS_CPU: enabled
	I0917 00:28:25.344864  591333 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0917 00:28:25.344910  591333 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0917 00:28:25.344953  591333 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0917 00:28:25.345000  591333 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0917 00:28:25.345048  591333 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0917 00:28:25.345119  591333 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0917 00:28:25.345192  591333 kubeadm.go:310] CGROUPS_IO: enabled
	I0917 00:28:25.345263  591333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 00:28:25.345346  591333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 00:28:25.345452  591333 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 00:28:25.345508  591333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 00:28:25.347069  591333 out.go:252]   - Generating certificates and keys ...
	I0917 00:28:25.347143  591333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 00:28:25.347233  591333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 00:28:25.347311  591333 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 00:28:25.347369  591333 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 00:28:25.347468  591333 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 00:28:25.347518  591333 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 00:28:25.347562  591333 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 00:28:25.347663  591333 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-671025 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 00:28:25.347707  591333 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 00:28:25.347846  591333 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-671025 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 00:28:25.348037  591333 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 00:28:25.348142  591333 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 00:28:25.348209  591333 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 00:28:25.348278  591333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 00:28:25.348323  591333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 00:28:25.348380  591333 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 00:28:25.348445  591333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 00:28:25.348531  591333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 00:28:25.348623  591333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 00:28:25.348735  591333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 00:28:25.348831  591333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 00:28:25.351075  591333 out.go:252]   - Booting up control plane ...
	I0917 00:28:25.351182  591333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 00:28:25.351283  591333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 00:28:25.351361  591333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 00:28:25.351548  591333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 00:28:25.351700  591333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0917 00:28:25.351849  591333 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0917 00:28:25.351934  591333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 00:28:25.351970  591333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 00:28:25.352082  591333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 00:28:25.352189  591333 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 00:28:25.352283  591333 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00103693s
	I0917 00:28:25.352386  591333 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0917 00:28:25.352498  591333 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0917 00:28:25.352576  591333 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0917 00:28:25.352659  591333 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0917 00:28:25.352745  591333 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.008701955s
	I0917 00:28:25.352807  591333 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.208053254s
	I0917 00:28:25.352891  591333 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 3.501882009s
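
The kubelet and control-plane checks above poll plain local endpoints, so they can be repeated manually when a boot stalls; a sketch with the ports taken from the log (-k skips TLS verification against the self-signed serving certs):

    curl -sf  http://127.0.0.1:10248/healthz  && echo kubelet ok
    curl -skf https://127.0.0.1:10257/healthz && echo controller-manager ok
    curl -skf https://127.0.0.1:10259/livez   && echo scheduler ok
    curl -skf https://192.168.49.2:8443/livez && echo apiserver ok
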
	I0917 00:28:25.352984  591333 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 00:28:25.353099  591333 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 00:28:25.353159  591333 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 00:28:25.353326  591333 kubeadm.go:310] [mark-control-plane] Marking the node ha-671025 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 00:28:25.353381  591333 kubeadm.go:310] [bootstrap-token] Using token: 945t58.lx3tewj0v31y7u2l
	I0917 00:28:25.354623  591333 out.go:252]   - Configuring RBAC rules ...
	I0917 00:28:25.354715  591333 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 00:28:25.354845  591333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 00:28:25.355014  591333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 00:28:25.355187  591333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 00:28:25.355345  591333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 00:28:25.355454  591333 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 00:28:25.355574  591333 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 00:28:25.355621  591333 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 00:28:25.355662  591333 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 00:28:25.355668  591333 kubeadm.go:310] 
	I0917 00:28:25.355718  591333 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 00:28:25.355727  591333 kubeadm.go:310] 
	I0917 00:28:25.355804  591333 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 00:28:25.355810  591333 kubeadm.go:310] 
	I0917 00:28:25.355831  591333 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 00:28:25.355911  591333 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 00:28:25.355972  591333 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 00:28:25.355979  591333 kubeadm.go:310] 
	I0917 00:28:25.356051  591333 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 00:28:25.356065  591333 kubeadm.go:310] 
	I0917 00:28:25.356135  591333 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 00:28:25.356143  591333 kubeadm.go:310] 
	I0917 00:28:25.356220  591333 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 00:28:25.356331  591333 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 00:28:25.356455  591333 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 00:28:25.356470  591333 kubeadm.go:310] 
	I0917 00:28:25.356549  591333 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 00:28:25.356635  591333 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 00:28:25.356643  591333 kubeadm.go:310] 
	I0917 00:28:25.356717  591333 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 945t58.lx3tewj0v31y7u2l \
	I0917 00:28:25.356829  591333 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 \
	I0917 00:28:25.356858  591333 kubeadm.go:310] 	--control-plane 
	I0917 00:28:25.356865  591333 kubeadm.go:310] 
	I0917 00:28:25.356941  591333 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 00:28:25.356947  591333 kubeadm.go:310] 
	I0917 00:28:25.357048  591333 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 945t58.lx3tewj0v31y7u2l \
	I0917 00:28:25.357188  591333 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 
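
The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 over the cluster CA's public key, so it can be recomputed on the control plane to verify a join command; this is the standard kubeadm recipe, using the certificatesDir from the config above:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
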
	I0917 00:28:25.357207  591333 cni.go:84] Creating CNI manager for ""
	I0917 00:28:25.357216  591333 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0917 00:28:25.358901  591333 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0917 00:28:25.360097  591333 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0917 00:28:25.364931  591333 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0917 00:28:25.364953  591333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0917 00:28:25.387094  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0917 00:28:25.613643  591333 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 00:28:25.613728  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:25.613746  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-671025 minikube.k8s.io/updated_at=2025_09_17T00_28_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-671025 minikube.k8s.io/primary=true
	I0917 00:28:25.624073  591333 ops.go:34] apiserver oom_adj: -16
	I0917 00:28:25.696361  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:26.196672  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:26.696850  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:27.197218  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:27.696539  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:28.196491  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:28.696543  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:29.196814  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:29.696595  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:30.196581  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:30.273337  591333 kubeadm.go:1105] duration metric: took 4.659672583s to wait for elevateKubeSystemPrivileges
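
The repeated "kubectl get sa default" calls above are a readiness poll: the cluster-admin binding for kube-system can only be applied once the default ServiceAccount controller has run. A standalone sketch of the same wait-then-bind sequence (commands mirror the log):

    until kubectl -n default get sa default >/dev/null 2>&1; do sleep 0.5; done
    kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default
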
	I0917 00:28:30.273483  591333 kubeadm.go:394] duration metric: took 15.19974193s to StartCluster
	I0917 00:28:30.273523  591333 settings.go:142] acquiring lock: {Name:mk3b4e5824fb8718eece00dc70a9d05f0af2a028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:30.273607  591333 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:28:30.274607  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:30.274913  591333 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:28:30.274945  591333 start.go:241] waiting for startup goroutines ...
	I0917 00:28:30.274948  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 00:28:30.274965  591333 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:28:30.275045  591333 addons.go:69] Setting storage-provisioner=true in profile "ha-671025"
	I0917 00:28:30.275085  591333 addons.go:238] Setting addon storage-provisioner=true in "ha-671025"
	I0917 00:28:30.275129  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:28:30.275048  591333 addons.go:69] Setting default-storageclass=true in profile "ha-671025"
	I0917 00:28:30.275164  591333 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-671025"
	I0917 00:28:30.275205  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:28:30.275523  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:30.275665  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:30.298018  591333 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:28:30.298668  591333 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:28:30.298695  591333 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:28:30.298702  591333 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:28:30.298708  591333 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:28:30.298714  591333 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:28:30.298802  591333 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:28:30.299193  591333 addons.go:238] Setting addon default-storageclass=true in "ha-671025"
	I0917 00:28:30.299247  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:28:30.299354  591333 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 00:28:30.299784  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:30.300585  591333 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:28:30.300605  591333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 00:28:30.300669  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:30.319752  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:30.321070  591333 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 00:28:30.321101  591333 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 00:28:30.321165  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:30.347717  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:30.362789  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 00:28:30.443108  591333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:28:30.467358  591333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 00:28:30.541692  591333 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
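
The sed pipeline a few lines up splices a hosts plugin block into the CoreDNS Corefile so in-cluster lookups of host.minikube.internal resolve to 192.168.49.1. The patched Corefile can be inspected afterwards:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
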
	I0917 00:28:30.788755  591333 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0917 00:28:30.790283  591333 addons.go:514] duration metric: took 515.302961ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0917 00:28:30.790337  591333 start.go:246] waiting for cluster config update ...
	I0917 00:28:30.790355  591333 start.go:255] writing updated cluster config ...
	I0917 00:28:30.792167  591333 out.go:203] 
	I0917 00:28:30.794434  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:28:30.794553  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:28:30.797029  591333 out.go:179] * Starting "ha-671025-m02" control-plane node in "ha-671025" cluster
	I0917 00:28:30.798740  591333 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:28:30.800340  591333 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:28:30.801532  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:28:30.801576  591333 cache.go:58] Caching tarball of preloaded images
	I0917 00:28:30.801656  591333 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:28:30.801701  591333 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:28:30.801721  591333 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:28:30.801837  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:28:30.826923  591333 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:28:30.826950  591333 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:28:30.826970  591333 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:28:30.827006  591333 start.go:360] acquireMachinesLock for ha-671025-m02: {Name:mk1465985964f60af81adbf10dbe0a21c7eb20d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:28:30.827168  591333 start.go:364] duration metric: took 135.604µs to acquireMachinesLock for "ha-671025-m02"
	I0917 00:28:30.827198  591333 start.go:93] Provisioning new machine with config: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:28:30.827285  591333 start.go:125] createHost starting for "m02" (driver="docker")
	I0917 00:28:30.829869  591333 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:28:30.830019  591333 start.go:159] libmachine.API.Create for "ha-671025" (driver="docker")
	I0917 00:28:30.830056  591333 client.go:168] LocalClient.Create starting
	I0917 00:28:30.830117  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 00:28:30.830162  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:28:30.830180  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:28:30.830241  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 00:28:30.830266  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:28:30.830274  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:28:30.830527  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:28:30.850687  591333 network_create.go:77] Found existing network {name:ha-671025 subnet:0xc0018d10b0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0917 00:28:30.850727  591333 kic.go:121] calculated static IP "192.168.49.3" for the "ha-671025-m02" container
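
The static IP is not guessed: minikube reads the subnet off the existing ha-671025 network and hands m02 the next free host address after the primary node's .2. A sketch of the same lookup with the docker CLI (the $k/$v template mirrors the inspect call logged above):

    docker network inspect ha-671025 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # -> 192.168.49.0/24 192.168.49.1
    docker network inspect ha-671025 --format '{{range $k, $v := .Containers}}{{$v.Name}} {{$v.IPv4Address}}{{println}}{{end}}'
    # ha-671025 holds 192.168.49.2/24, so the next free address for m02 is 192.168.49.3
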
	I0917 00:28:30.850801  591333 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:28:30.869737  591333 cli_runner.go:164] Run: docker volume create ha-671025-m02 --label name.minikube.sigs.k8s.io=ha-671025-m02 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:28:30.890468  591333 oci.go:103] Successfully created a docker volume ha-671025-m02
	I0917 00:28:30.890596  591333 cli_runner.go:164] Run: docker run --rm --name ha-671025-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025-m02 --entrypoint /usr/bin/test -v ha-671025-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:28:31.278702  591333 oci.go:107] Successfully prepared a docker volume ha-671025-m02
	I0917 00:28:31.278750  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:28:31.278777  591333 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:28:31.278882  591333 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:28:35.682273  591333 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.403350864s)
	I0917 00:28:35.682311  591333 kic.go:203] duration metric: took 4.403531688s to extract preloaded images to volume ...
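
The preload sidecar is a reusable pattern: mount the tarball read-only, mount the named volume at the extraction path, and let a throwaway container populate the volume before the real node container ever starts. A reduced sketch (image digest elided for brevity; the local tarball path is hypothetical):

    docker volume create demo-var
    docker run --rm \
      -v "$PWD/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro" \
      -v demo-var:/extractDir \
      --entrypoint /usr/bin/tar \
      gcr.io/k8s-minikube/kicbase:v0.0.48 -I lz4 -xf /preloaded.tar -C /extractDir
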
	W0917 00:28:35.682411  591333 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:28:35.682448  591333 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:28:35.682488  591333 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:28:35.742164  591333 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-671025-m02 --name ha-671025-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-671025-m02 --network ha-671025 --ip 192.168.49.3 --volume ha-671025-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 00:28:36.033045  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Running}}
	I0917 00:28:36.053351  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:28:36.072949  591333 cli_runner.go:164] Run: docker exec ha-671025-m02 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:28:36.126815  591333 oci.go:144] the created container "ha-671025-m02" has a running status.
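
Stripped of its labels, the docker run above is the entire kic recipe: a privileged, systemd-capable container with a static IP on the cluster network, /var backed by the named volume, and SSH plus the API server published on ephemeral localhost ports. The load-bearing flags, as a sketch (digest elided):

    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
      --network ha-671025 --ip 192.168.49.3 \
      --volume ha-671025-m02:/var --memory=3072mb \
      --publish=127.0.0.1::22 --publish=127.0.0.1::8443 \
      --hostname ha-671025-m02 --name ha-671025-m02 \
      gcr.io/k8s-minikube/kicbase:v0.0.48
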
	I0917 00:28:36.126844  591333 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa...
	I0917 00:28:36.161749  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0917 00:28:36.161792  591333 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:28:36.189714  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:28:36.212082  591333 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 00:28:36.212109  591333 kic_runner.go:114] Args: [docker exec --privileged ha-671025-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
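
SSH bootstrap is exactly what you would do by hand: generate a keypair on the host, drop the public half into the container's authorized_keys, and fix ownership. An equivalent manual sequence (the local ./id_rsa path is hypothetical; assumes the image's docker user home exists, as the copy above implies):

    ssh-keygen -t rsa -f ./id_rsa -N ''
    docker exec ha-671025-m02 mkdir -p /home/docker/.ssh
    docker cp ./id_rsa.pub ha-671025-m02:/home/docker/.ssh/authorized_keys
    docker exec --privileged ha-671025-m02 chown docker:docker /home/docker/.ssh/authorized_keys
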
	I0917 00:28:36.260306  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:28:36.282829  591333 machine.go:93] provisionDockerMachine start ...
	I0917 00:28:36.282954  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:36.312073  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:36.312435  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I0917 00:28:36.312461  591333 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:28:36.313226  591333 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47290->127.0.0.1:33153: read: connection reset by peer
	I0917 00:28:39.452508  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m02
	
	I0917 00:28:39.452557  591333 ubuntu.go:182] provisioning hostname "ha-671025-m02"
	I0917 00:28:39.452652  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:39.472236  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:39.472561  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I0917 00:28:39.472581  591333 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m02 && echo "ha-671025-m02" | sudo tee /etc/hostname
	I0917 00:28:39.626427  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m02
	
	I0917 00:28:39.626517  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:39.645919  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:39.646146  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I0917 00:28:39.646163  591333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:28:39.786717  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:28:39.786756  591333 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:28:39.786781  591333 ubuntu.go:190] setting up certificates
	I0917 00:28:39.786798  591333 provision.go:84] configureAuth start
	I0917 00:28:39.786974  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:28:39.807773  591333 provision.go:143] copyHostCerts
	I0917 00:28:39.807815  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:28:39.807847  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:28:39.807858  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:28:39.807932  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:28:39.808029  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:28:39.808050  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:28:39.808054  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:28:39.808081  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:28:39.808149  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:28:39.808167  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:28:39.808172  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:28:39.808200  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:28:39.808255  591333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m02 san=[127.0.0.1 192.168.49.3 ha-671025-m02 localhost minikube]
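
The server cert is minted with every name the machine may be reached by: the container hostname, localhost for the port-forwarded path, and the static container IP. Whether the SAN list came out as requested can be verified with openssl against the path logged above:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
    # expect entries for ha-671025-m02, localhost, minikube, 127.0.0.1 and 192.168.49.3
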
	I0917 00:28:39.918454  591333 provision.go:177] copyRemoteCerts
	I0917 00:28:39.918537  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:28:39.918589  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:39.937978  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:28:40.039160  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:28:40.039233  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:28:40.069797  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:28:40.069887  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:28:40.098311  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:28:40.098408  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 00:28:40.127419  591333 provision.go:87] duration metric: took 340.575644ms to configureAuth
	I0917 00:28:40.127458  591333 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:28:40.127656  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:28:40.127785  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:40.147026  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:40.147308  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I0917 00:28:40.147331  591333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:28:40.409609  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:28:40.409640  591333 machine.go:96] duration metric: took 4.1267811s to provisionDockerMachine
	I0917 00:28:40.409651  591333 client.go:171] duration metric: took 9.579589798s to LocalClient.Create
	I0917 00:28:40.409674  591333 start.go:167] duration metric: took 9.579655281s to libmachine.API.Create "ha-671025"
	I0917 00:28:40.409684  591333 start.go:293] postStartSetup for "ha-671025-m02" (driver="docker")
	I0917 00:28:40.409696  591333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:28:40.409769  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:28:40.409816  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:40.431881  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:28:40.535836  591333 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:28:40.540091  591333 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:28:40.540127  591333 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:28:40.540134  591333 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:28:40.540141  591333 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:28:40.540153  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:28:40.540203  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:28:40.540294  591333 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:28:40.540310  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:28:40.540600  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:28:40.551220  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:28:40.582236  591333 start.go:296] duration metric: took 172.533526ms for postStartSetup
	I0917 00:28:40.582728  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:28:40.602550  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:28:40.602895  591333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:28:40.602973  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:40.625331  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:28:40.720887  591333 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:28:40.725796  591333 start.go:128] duration metric: took 9.898487722s to createHost
	I0917 00:28:40.725827  591333 start.go:83] releasing machines lock for "ha-671025-m02", held for 9.89864483s
	I0917 00:28:40.725898  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:28:40.749075  591333 out.go:179] * Found network options:
	I0917 00:28:40.750936  591333 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:28:40.752439  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:28:40.752503  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:28:40.752575  591333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:28:40.752624  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:40.752703  591333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:28:40.752776  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:40.774163  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:28:40.775400  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:28:41.009369  591333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:28:41.014989  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:28:41.040280  591333 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:28:41.040373  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:28:41.077837  591333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 00:28:41.077864  591333 start.go:495] detecting cgroup driver to use...
	I0917 00:28:41.077899  591333 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:28:41.077939  591333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:28:41.098363  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:28:41.112692  591333 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:28:41.112768  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:28:41.128481  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:28:41.145954  591333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:28:41.216259  591333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:28:41.293618  591333 docker.go:234] disabling docker service ...
	I0917 00:28:41.293683  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:28:41.314463  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:28:41.327805  591333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:28:41.402097  591333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:28:41.515728  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:28:41.528751  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:28:41.548638  591333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:28:41.548717  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.563770  591333 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:28:41.563842  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.575236  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.586559  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.599824  591333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:28:41.612614  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.624744  591333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.645749  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
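
Taken together, the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned, the cgroup manager is forced to systemd to match the host, conmon is moved into the pod cgroup, and unprivileged binds to low ports are allowed via default_sysctls. The net effect can be confirmed on the node:

    docker exec ha-671025-m02 grep -E \
      'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
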
	I0917 00:28:41.659897  591333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:28:41.670457  591333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:28:41.680684  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:28:41.816654  591333 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 00:28:41.923179  591333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:28:41.923241  591333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:28:41.927246  591333 start.go:563] Will wait 60s for crictl version
	I0917 00:28:41.927309  591333 ssh_runner.go:195] Run: which crictl
	I0917 00:28:41.931155  591333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:28:41.970363  591333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:28:41.970470  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:28:42.009043  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:28:42.057831  591333 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:28:42.059352  591333 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:28:42.061008  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:28:42.081413  591333 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:28:42.086716  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:28:42.100745  591333 mustload.go:65] Loading cluster: ha-671025
	I0917 00:28:42.100976  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:28:42.101278  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:42.124810  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:28:42.125292  591333 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.3
	I0917 00:28:42.125333  591333 certs.go:194] generating shared ca certs ...
	I0917 00:28:42.125361  591333 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:42.125545  591333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:28:42.125614  591333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:28:42.125626  591333 certs.go:256] generating profile certs ...
	I0917 00:28:42.125787  591333 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:28:42.125831  591333 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.d800739c
	I0917 00:28:42.125848  591333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.d800739c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0917 00:28:43.131520  591333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.d800739c ...
	I0917 00:28:43.131559  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.d800739c: {Name:mk97bbbbe985039a36a56311ec983801d49afc24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:43.131793  591333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.d800739c ...
	I0917 00:28:43.131814  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.d800739c: {Name:mk2a126624b47a1fbca817c2bf7b065e9ee5a854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:43.131938  591333 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.d800739c -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt
	I0917 00:28:43.132097  591333 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.d800739c -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key
	I0917 00:28:43.132233  591333 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:28:43.132252  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:28:43.132265  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:28:43.132275  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:28:43.132286  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:28:43.132296  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:28:43.132308  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:28:43.132318  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:28:43.132330  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:28:43.132385  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:28:43.132425  591333 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:28:43.132435  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:28:43.132458  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:28:43.132480  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:28:43.132500  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:28:43.132536  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:28:43.132561  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:28:43.132576  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:43.132588  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:28:43.132646  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:43.152207  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:43.242834  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:28:43.247724  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:28:43.261684  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:28:43.265651  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 00:28:43.279426  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:28:43.283200  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:28:43.298316  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:28:43.302656  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 00:28:43.316567  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:28:43.320915  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:28:43.334735  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:28:43.339251  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:28:43.354686  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:28:43.382622  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:28:43.411140  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:28:43.439208  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:28:43.468797  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0917 00:28:43.497239  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:28:43.525628  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:28:43.552854  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:28:43.579567  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:28:43.613480  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:28:43.640927  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:28:43.668098  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:28:43.688016  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 00:28:43.709638  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:28:43.729987  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 00:28:43.751570  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:28:43.772873  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:28:43.793231  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:28:43.813996  591333 ssh_runner.go:195] Run: openssl version
	I0917 00:28:43.820372  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:28:43.831827  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:28:43.836450  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:28:43.836601  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:28:43.845799  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:28:43.858335  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:28:43.870361  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:28:43.874499  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:28:43.874557  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:28:43.882167  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:28:43.894006  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:28:43.906727  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:43.910868  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:43.910926  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:43.918600  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
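
The <hash>.0 link names above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's c_rehash convention: each link is named after the subject-name hash of its certificate, which lets the TLS verifier locate a CA in /etc/ssl/certs without reading every file. The hash is reproducible directly:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941  -> hence the symlink /etc/ssl/certs/b5213941.0
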
	I0917 00:28:43.930014  591333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:28:43.933717  591333 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 00:28:43.933786  591333 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 crio true true} ...
	I0917 00:28:43.933892  591333 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:28:43.933920  591333 kube-vip.go:115] generating kube-vip config ...
	I0917 00:28:43.933956  591333 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:28:43.949251  591333 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:28:43.949348  591333 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
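
With cp_enable and leader election set, the elected kube-vip instance ARP-advertises 192.168.49.254 on eth0 of whichever control-plane node currently holds the plndr-cp-lock lease; that is what lets the shared API endpoint float between nodes. Two hand-checks, assuming iproute2 is present in the node image and default lease semantics:

    docker exec ha-671025 ip -4 addr show eth0
    # expect 192.168.49.2/24 plus, on the current leader, 192.168.49.254/32
    kubectl --context ha-671025 -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'
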
	I0917 00:28:43.949436  591333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:28:43.959785  591333 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:28:43.959858  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:28:43.970815  591333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 00:28:43.992525  591333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:28:44.016479  591333 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:28:44.038080  591333 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:28:44.042531  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:28:44.055802  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:28:44.123804  591333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:28:44.146604  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:28:44.146887  591333 start.go:317] joinCluster: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:28:44.146991  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0917 00:28:44.147052  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:44.166636  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:44.318607  591333 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:28:44.318654  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9ffj9m.gils691l0zbv1gz9 --discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-671025-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0917 00:29:01.319807  591333 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9ffj9m.gils691l0zbv1gz9 --discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-671025-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (17.001126344s)
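
The control-plane join is plain kubeadm underneath: mint a long-lived token on the existing node, then join with --control-plane and the new member's advertise address; minikube adds --ignore-preflight-errors=all only because the container environment trips preflight checks that are harmless here. The two-step shape, with the secrets elided:

    # on the existing control plane
    kubeadm token create --print-join-command --ttl=0
    # on the joining node, with the control-plane flags appended
    kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address=192.168.49.3 \
      --cri-socket unix:///var/run/crio/crio.sock
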
	I0917 00:29:01.319840  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0917 00:29:01.532514  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-671025-m02 minikube.k8s.io/updated_at=2025_09_17T00_29_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-671025 minikube.k8s.io/primary=false
	I0917 00:29:01.623743  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-671025-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0917 00:29:01.704118  591333 start.go:319] duration metric: took 17.557224287s to joinCluster
	I0917 00:29:01.704207  591333 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:29:01.704539  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:29:01.705687  591333 out.go:179] * Verifying Kubernetes components...
	I0917 00:29:01.707014  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:29:01.810630  591333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:29:01.824161  591333 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:29:01.824231  591333 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:29:01.824550  591333 node_ready.go:35] waiting up to 6m0s for node "ha-671025-m02" to be "Ready" ...
	W0917 00:29:03.828446  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	W0917 00:29:05.829871  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	W0917 00:29:08.329045  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	W0917 00:29:10.828964  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	W0917 00:29:13.328972  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	W0917 00:29:15.828569  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	I0917 00:29:16.328859  591333 node_ready.go:49] node "ha-671025-m02" is "Ready"
	I0917 00:29:16.328891  591333 node_ready.go:38] duration metric: took 14.504319776s for node "ha-671025-m02" to be "Ready" ...
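
The retry loop above (poll until the node's Ready condition stops being False) collapses to a single command when reproducing the check by hand:

    kubectl --context ha-671025 wait --for=condition=Ready node/ha-671025-m02 --timeout=6m
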
	I0917 00:29:16.328908  591333 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:29:16.328959  591333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:29:16.341005  591333 api_server.go:72] duration metric: took 14.636761134s to wait for apiserver process to appear ...
	I0917 00:29:16.341029  591333 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:29:16.341048  591333 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:29:16.345248  591333 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:29:16.346148  591333 api_server.go:141] control plane version: v1.34.0
	I0917 00:29:16.346174  591333 api_server.go:131] duration metric: took 5.137742ms to wait for apiserver health ...
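
The healthz poll needs no credentials on a default kubeadm cluster, since the system:public-info-viewer role exposes /healthz and /version to unauthenticated callers; the same probe from the host is just:

    curl -sk https://192.168.49.2:8443/healthz
    # ok
    curl -sk https://192.168.49.2:8443/version   # reports v1.34.0, matching the version line above
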
	I0917 00:29:16.346183  591333 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:29:16.351147  591333 system_pods.go:59] 17 kube-system pods found
	I0917 00:29:16.351175  591333 system_pods.go:61] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running
	I0917 00:29:16.351180  591333 system_pods.go:61] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running
	I0917 00:29:16.351184  591333 system_pods.go:61] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:29:16.351187  591333 system_pods.go:61] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:29:16.351190  591333 system_pods.go:61] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:29:16.351194  591333 system_pods.go:61] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:29:16.351198  591333 system_pods.go:61] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:29:16.351203  591333 system_pods.go:61] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:29:16.351206  591333 system_pods.go:61] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:29:16.351210  591333 system_pods.go:61] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:29:16.351213  591333 system_pods.go:61] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:29:16.351216  591333 system_pods.go:61] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:29:16.351219  591333 system_pods.go:61] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:29:16.351222  591333 system_pods.go:61] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:29:16.351225  591333 system_pods.go:61] "kube-vip-ha-671025" [d18d568e-7183-4cb4-898f-c730aa8b9811] Running
	I0917 00:29:16.351227  591333 system_pods.go:61] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:29:16.351230  591333 system_pods.go:61] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:29:16.351235  591333 system_pods.go:74] duration metric: took 5.047428ms to wait for pod list to return data ...
	I0917 00:29:16.351245  591333 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:29:16.354087  591333 default_sa.go:45] found service account: "default"
	I0917 00:29:16.354107  591333 default_sa.go:55] duration metric: took 2.857135ms for default service account to be created ...
	I0917 00:29:16.354115  591333 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:29:16.357519  591333 system_pods.go:86] 17 kube-system pods found
	I0917 00:29:16.357544  591333 system_pods.go:89] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running
	I0917 00:29:16.357550  591333 system_pods.go:89] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running
	I0917 00:29:16.357555  591333 system_pods.go:89] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:29:16.357560  591333 system_pods.go:89] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:29:16.357565  591333 system_pods.go:89] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:29:16.357570  591333 system_pods.go:89] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:29:16.357576  591333 system_pods.go:89] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:29:16.357582  591333 system_pods.go:89] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:29:16.357591  591333 system_pods.go:89] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:29:16.357599  591333 system_pods.go:89] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:29:16.357605  591333 system_pods.go:89] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:29:16.357611  591333 system_pods.go:89] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:29:16.357614  591333 system_pods.go:89] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:29:16.357619  591333 system_pods.go:89] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:29:16.357623  591333 system_pods.go:89] "kube-vip-ha-671025" [d18d568e-7183-4cb4-898f-c730aa8b9811] Running
	I0917 00:29:16.357630  591333 system_pods.go:89] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:29:16.357633  591333 system_pods.go:89] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:29:16.357642  591333 system_pods.go:126] duration metric: took 3.522377ms to wait for k8s-apps to be running ...
	I0917 00:29:16.357652  591333 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:29:16.357710  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:29:16.370259  591333 system_svc.go:56] duration metric: took 12.594604ms WaitForService to wait for kubelet
	I0917 00:29:16.370292  591333 kubeadm.go:578] duration metric: took 14.666051199s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:29:16.370351  591333 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:29:16.373484  591333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:29:16.373509  591333 node_conditions.go:123] node cpu capacity is 8
	I0917 00:29:16.373526  591333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:29:16.373531  591333 node_conditions.go:123] node cpu capacity is 8
	I0917 00:29:16.373545  591333 node_conditions.go:105] duration metric: took 3.187263ms to run NodePressure ...
	I0917 00:29:16.373563  591333 start.go:241] waiting for startup goroutines ...
	I0917 00:29:16.373599  591333 start.go:255] writing updated cluster config ...
	I0917 00:29:16.375540  591333 out.go:203] 
	I0917 00:29:16.376982  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:29:16.377123  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:29:16.378689  591333 out.go:179] * Starting "ha-671025-m03" control-plane node in "ha-671025" cluster
	I0917 00:29:16.380127  591333 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:29:16.381271  591333 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:29:16.382178  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:29:16.382203  591333 cache.go:58] Caching tarball of preloaded images
	I0917 00:29:16.382278  591333 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:29:16.382305  591333 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:29:16.382314  591333 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:29:16.382434  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:29:16.405280  591333 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:29:16.405301  591333 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:29:16.405319  591333 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:29:16.405349  591333 start.go:360] acquireMachinesLock for ha-671025-m03: {Name:mk60ae20c28e89b2af34eaf4825fcf2e756b9f82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:29:16.405476  591333 start.go:364] duration metric: took 109.564µs to acquireMachinesLock for "ha-671025-m03"
	I0917 00:29:16.405502  591333 start.go:93] Provisioning new machine with config: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:29:16.405601  591333 start.go:125] createHost starting for "m03" (driver="docker")
	I0917 00:29:16.408212  591333 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:29:16.408326  591333 start.go:159] libmachine.API.Create for "ha-671025" (driver="docker")
	I0917 00:29:16.408364  591333 client.go:168] LocalClient.Create starting
	I0917 00:29:16.408459  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 00:29:16.408501  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:29:16.408515  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:29:16.408569  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 00:29:16.408588  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:29:16.408596  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:29:16.408797  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:29:16.428129  591333 network_create.go:77] Found existing network {name:ha-671025 subnet:0xc001a2abd0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0917 00:29:16.428169  591333 kic.go:121] calculated static IP "192.168.49.4" for the "ha-671025-m03" container
	I0917 00:29:16.428233  591333 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:29:16.447362  591333 cli_runner.go:164] Run: docker volume create ha-671025-m03 --label name.minikube.sigs.k8s.io=ha-671025-m03 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:29:16.467514  591333 oci.go:103] Successfully created a docker volume ha-671025-m03
	I0917 00:29:16.467629  591333 cli_runner.go:164] Run: docker run --rm --name ha-671025-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025-m03 --entrypoint /usr/bin/test -v ha-671025-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:29:16.870641  591333 oci.go:107] Successfully prepared a docker volume ha-671025-m03
	I0917 00:29:16.870686  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:29:16.870713  591333 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:29:16.870789  591333 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:29:21.201351  591333 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.33049988s)
	I0917 00:29:21.201386  591333 kic.go:203] duration metric: took 4.330670212s to extract preloaded images to volume ...
	W0917 00:29:21.201499  591333 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:29:21.201529  591333 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:29:21.201570  591333 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:29:21.257059  591333 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-671025-m03 --name ha-671025-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-671025-m03 --network ha-671025 --ip 192.168.49.4 --volume ha-671025-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 00:29:21.526231  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Running}}
	I0917 00:29:21.546352  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:29:21.567256  591333 cli_runner.go:164] Run: docker exec ha-671025-m03 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:29:21.619083  591333 oci.go:144] the created container "ha-671025-m03" has a running status.
	I0917 00:29:21.619117  591333 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa...
	I0917 00:29:21.831158  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0917 00:29:21.831204  591333 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:29:21.864081  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:29:21.886560  591333 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 00:29:21.886587  591333 kic_runner.go:114] Args: [docker exec --privileged ha-671025-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 00:29:21.939241  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:29:21.960815  591333 machine.go:93] provisionDockerMachine start ...
	I0917 00:29:21.961005  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:21.982259  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:29:21.982549  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0917 00:29:21.982571  591333 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:29:22.123516  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m03
	
	I0917 00:29:22.123558  591333 ubuntu.go:182] provisioning hostname "ha-671025-m03"
	I0917 00:29:22.123633  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:22.143852  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:29:22.144070  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0917 00:29:22.144083  591333 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m03 && echo "ha-671025-m03" | sudo tee /etc/hostname
	I0917 00:29:22.298146  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m03
	
	I0917 00:29:22.298229  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:22.317607  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:29:22.317851  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0917 00:29:22.317875  591333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:29:22.455839  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:29:22.455874  591333 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:29:22.455894  591333 ubuntu.go:190] setting up certificates
	I0917 00:29:22.455908  591333 provision.go:84] configureAuth start
	I0917 00:29:22.455983  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:29:22.474745  591333 provision.go:143] copyHostCerts
	I0917 00:29:22.474791  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:29:22.474821  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:29:22.474830  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:29:22.474900  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:29:22.474988  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:29:22.475015  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:29:22.475028  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:29:22.475061  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:29:22.475116  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:29:22.475134  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:29:22.475141  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:29:22.475164  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:29:22.475216  591333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m03 san=[127.0.0.1 192.168.49.4 ha-671025-m03 localhost minikube]
	I0917 00:29:22.562518  591333 provision.go:177] copyRemoteCerts
	I0917 00:29:22.562597  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:29:22.562645  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:22.582491  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:29:22.681516  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:29:22.681585  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:29:22.711977  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:29:22.712070  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:29:22.739378  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:29:22.739454  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:29:22.767225  591333 provision.go:87] duration metric: took 311.299307ms to configureAuth
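configureAuth above signs a Docker machine server certificate against the profile CA, with the SANs listed at provision.go:117. A rough openssl equivalent (illustrative only, not minikube's actual code path; the 1095-day validity mirrors CertExpiration:26280h0m0s from the machine config):

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.ha-671025-m03" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 1095 -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.49.4,DNS:ha-671025-m03,DNS:localhost,DNS:minikube")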
	I0917 00:29:22.767254  591333 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:29:22.767513  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:29:22.767641  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:22.787106  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:29:22.787322  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0917 00:29:22.787337  591333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:29:23.027585  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:29:23.027618  591333 machine.go:96] duration metric: took 1.066782115s to provisionDockerMachine
	I0917 00:29:23.027629  591333 client.go:171] duration metric: took 6.619257203s to LocalClient.Create
	I0917 00:29:23.027644  591333 start.go:167] duration metric: took 6.619319411s to libmachine.API.Create "ha-671025"
	I0917 00:29:23.027653  591333 start.go:293] postStartSetup for "ha-671025-m03" (driver="docker")
	I0917 00:29:23.027665  591333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:29:23.027739  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:29:23.027789  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:23.048535  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:29:23.148623  591333 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:29:23.152295  591333 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:29:23.152333  591333 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:29:23.152344  591333 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:29:23.152354  591333 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:29:23.152402  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:29:23.152478  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:29:23.152577  591333 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:29:23.152589  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:29:23.152698  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:29:23.162366  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:29:23.192510  591333 start.go:296] duration metric: took 164.839418ms for postStartSetup
	I0917 00:29:23.192875  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:29:23.211261  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:29:23.211545  591333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:29:23.211589  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:23.228367  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:29:23.323873  591333 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:29:23.328453  591333 start.go:128] duration metric: took 6.922836798s to createHost
	I0917 00:29:23.328480  591333 start.go:83] releasing machines lock for "ha-671025-m03", held for 6.9229927s
	I0917 00:29:23.328559  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:29:23.348699  591333 out.go:179] * Found network options:
	I0917 00:29:23.350091  591333 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0917 00:29:23.351262  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:29:23.351286  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:29:23.351307  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:29:23.351319  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:29:23.351413  591333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:29:23.351457  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:23.351483  591333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:29:23.351555  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:23.370656  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:29:23.370963  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:29:23.603202  591333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:29:23.608556  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:29:23.632987  591333 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:29:23.633078  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:29:23.665413  591333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 00:29:23.665445  591333 start.go:495] detecting cgroup driver to use...
	I0917 00:29:23.665479  591333 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:29:23.665582  591333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:29:23.682513  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:29:23.695198  591333 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:29:23.695265  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:29:23.710235  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:29:23.725450  591333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:29:23.796030  591333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:29:23.870255  591333 docker.go:234] disabling docker service ...
	I0917 00:29:23.870317  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:29:23.889003  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:29:23.901613  591333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:29:23.973987  591333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:29:24.138099  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:29:24.150712  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:29:24.168641  591333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:29:24.168702  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.181874  591333 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:29:24.181936  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.193571  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.204646  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.215806  591333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:29:24.225866  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.236708  591333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.254758  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.266984  591333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:29:24.276695  591333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:29:24.286587  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:29:24.356850  591333 ssh_runner.go:195] Run: sudo systemctl restart crio
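The sed edits above all land in the same drop-in; their net effect is roughly this fragment of /etc/crio/crio.conf.d/02-crio.conf (a sketch; the section headers are assumed from the stock CRI-O layout, and the log patches keys in place rather than rewriting the file):

    sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart crio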
	I0917 00:29:24.461065  591333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:29:24.461156  591333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:29:24.465833  591333 start.go:563] Will wait 60s for crictl version
	I0917 00:29:24.465903  591333 ssh_runner.go:195] Run: which crictl
	I0917 00:29:24.469817  591333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:29:24.506319  591333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:29:24.506419  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:29:24.544050  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:29:24.583372  591333 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:29:24.584727  591333 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:29:24.586235  591333 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0917 00:29:24.587662  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:29:24.605611  591333 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:29:24.610151  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:29:24.622865  591333 mustload.go:65] Loading cluster: ha-671025
	I0917 00:29:24.623090  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:29:24.623289  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:29:24.641474  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:29:24.641732  591333 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.4
	I0917 00:29:24.641743  591333 certs.go:194] generating shared ca certs ...
	I0917 00:29:24.641758  591333 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:29:24.641894  591333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:29:24.641944  591333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:29:24.641954  591333 certs.go:256] generating profile certs ...
	I0917 00:29:24.642025  591333 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:29:24.642065  591333 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.bb6f0fe7
	I0917 00:29:24.642081  591333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.bb6f0fe7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0917 00:29:24.856212  591333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.bb6f0fe7 ...
	I0917 00:29:24.856249  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.bb6f0fe7: {Name:mk65d29cf7ba29b99ab2056d134a2884f928fccb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:29:24.856490  591333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.bb6f0fe7 ...
	I0917 00:29:24.856512  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.bb6f0fe7: {Name:mkd89da6d4d9fb3421e5c7677b39452bd32f11a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:29:24.856628  591333 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.bb6f0fe7 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt
	I0917 00:29:24.856803  591333 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.bb6f0fe7 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key
	I0917 00:29:24.856940  591333 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:29:24.856957  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:29:24.856970  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:29:24.856984  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:29:24.857022  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:29:24.857038  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:29:24.857051  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:29:24.857063  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:29:24.857073  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:29:24.857137  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:29:24.857169  591333 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:29:24.857179  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:29:24.857203  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:29:24.857236  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:29:24.857259  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:29:24.857298  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:29:24.857323  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:29:24.857336  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:29:24.857410  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:29:24.857487  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:29:24.876681  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:29:24.965759  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:29:24.970077  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:29:24.983505  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:29:24.987459  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 00:29:25.001249  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:29:25.005139  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:29:25.019000  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:29:25.023277  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 00:29:25.037665  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:29:25.041486  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:29:25.056004  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:29:25.060379  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:29:25.075527  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:29:25.103048  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:29:25.130436  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:29:25.156335  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:29:25.183962  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0917 00:29:25.210290  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:29:25.237850  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:29:25.264713  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:29:25.292266  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:29:25.322436  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:29:25.349159  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:29:25.376714  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:29:25.397066  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 00:29:25.416141  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:29:25.436031  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 00:29:25.455195  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:29:25.475694  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:29:25.494981  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:29:25.514182  591333 ssh_runner.go:195] Run: openssl version
	I0917 00:29:25.519757  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:29:25.530366  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:29:25.534300  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:29:25.534372  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:29:25.541463  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:29:25.551798  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:29:25.562696  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:29:25.566820  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:29:25.566898  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:29:25.575288  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:29:25.585578  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:29:25.596219  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:29:25.599949  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:29:25.600000  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:29:25.608220  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
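The test/ln pairs above implement OpenSSL's hashed-directory lookup: each trusted CA in /etc/ssl/certs needs a <subject-hash>.0 symlink. A minimal reproduction of one link (the hash matches the b5213941.0 seen above):

    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"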
	I0917 00:29:25.620163  591333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:29:25.623987  591333 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 00:29:25.624048  591333 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 crio true true} ...
	I0917 00:29:25.624137  591333 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
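The kubelet unit and flags above are written out as systemd files a few lines below (10-kubeadm.conf and kubelet.service); once copied, the merged result can be sanity-checked on the node (illustrative):

    systemctl cat kubelet          # shows kubelet.service plus the 10-kubeadm.conf override
    sudo systemctl daemon-reload && sudo systemctl restart kubelet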
	I0917 00:29:25.624164  591333 kube-vip.go:115] generating kube-vip config ...
	I0917 00:29:25.624201  591333 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:29:25.637994  591333 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:29:25.638073  591333 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
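Because the lsmod probe above found no ip_vs modules, kube-vip falls back to ARP-based leader election for the VIP (note vip_leaderelection in the manifest) instead of IPVS load balancing. Loading the modules before provisioning would satisfy that check (a sketch; whether minikube then enables load balancing is its own logic):

    sudo modprobe ip_vs ip_vs_rr
    lsmod | grep ip_vs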
	I0917 00:29:25.638135  591333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:29:25.647722  591333 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:29:25.647792  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:29:25.658193  591333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 00:29:25.679949  591333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:29:25.703178  591333 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:29:25.726279  591333 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:29:25.730482  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:29:25.743251  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:29:25.813167  591333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:29:25.837618  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:29:25.837905  591333 start.go:317] joinCluster: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:29:25.838070  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0917 00:29:25.838130  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:29:25.859495  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:29:26.008672  591333 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:29:26.008736  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p1m8ud.vg6wowozjxeubnbu --discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-671025-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0917 00:29:38.691373  591333 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p1m8ud.vg6wowozjxeubnbu --discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-671025-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (12.682606276s)
	I0917 00:29:38.691443  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0917 00:29:38.941535  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-671025-m03 minikube.k8s.io/updated_at=2025_09_17T00_29_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-671025 minikube.k8s.io/primary=false
	I0917 00:29:39.021358  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-671025-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0917 00:29:39.107652  591333 start.go:319] duration metric: took 13.269740721s to joinCluster
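	
	Two details in the join sequence above are easy to misread: the token is minted with --ttl=0 (non-expiring) by "kubeadm token create --print-join-command", and the trailing dash in "node-role.kubernetes.io/control-plane:NoSchedule-" removes that taint, leaving the new control-plane node schedulable for regular workloads. A condensed sketch of the same flow (token and hash values are cluster-specific):
	  $ kubeadm token create --print-join-command --ttl=0
	  $ kubeadm join control-plane.minikube.internal:8443 --token <token> \
	      --discovery-token-ca-cert-hash sha256:<hash> --control-plane \
	      --apiserver-advertise-address=192.168.49.4
	  $ kubectl taint nodes ha-671025-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	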
	I0917 00:29:39.107734  591333 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:29:39.108038  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:29:39.109032  591333 out.go:179] * Verifying Kubernetes components...
	I0917 00:29:39.110170  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:29:39.212840  591333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:29:39.228175  591333 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:29:39.228249  591333 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:29:39.228513  591333 node_ready.go:35] waiting up to 6m0s for node "ha-671025-m03" to be "Ready" ...
	W0917 00:29:41.232779  591333 node_ready.go:57] node "ha-671025-m03" has "Ready":"False" status (will retry)
	W0917 00:29:43.732906  591333 node_ready.go:57] node "ha-671025-m03" has "Ready":"False" status (will retry)
	W0917 00:29:46.232976  591333 node_ready.go:57] node "ha-671025-m03" has "Ready":"False" status (will retry)
	W0917 00:29:48.732961  591333 node_ready.go:57] node "ha-671025-m03" has "Ready":"False" status (will retry)
	W0917 00:29:51.232362  591333 node_ready.go:57] node "ha-671025-m03" has "Ready":"False" status (will retry)
	I0917 00:29:51.732347  591333 node_ready.go:49] node "ha-671025-m03" is "Ready"
	I0917 00:29:51.732379  591333 node_ready.go:38] duration metric: took 12.503848437s for node "ha-671025-m03" to be "Ready" ...
	I0917 00:29:51.732413  591333 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:29:51.732477  591333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:29:51.745126  591333 api_server.go:72] duration metric: took 12.637355364s to wait for apiserver process to appear ...
	I0917 00:29:51.745157  591333 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:29:51.745182  591333 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:29:51.751075  591333 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
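	
	Note that the healthz probe above targets the first control-plane endpoint (192.168.49.2) directly rather than the VIP. A sketch of the same check by hand, reusing the client certificates from the rest.Config dump earlier (anonymous access may also suffice, depending on apiserver flags):
	  $ curl --cacert /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt \
	      --cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt \
	      --key /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key \
	      https://192.168.49.2:8443/healthz
	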
	I0917 00:29:51.752025  591333 api_server.go:141] control plane version: v1.34.0
	I0917 00:29:51.752049  591333 api_server.go:131] duration metric: took 6.885054ms to wait for apiserver health ...
	I0917 00:29:51.752060  591333 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:29:51.758905  591333 system_pods.go:59] 24 kube-system pods found
	I0917 00:29:51.758940  591333 system_pods.go:61] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running
	I0917 00:29:51.758949  591333 system_pods.go:61] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running
	I0917 00:29:51.758957  591333 system_pods.go:61] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:29:51.758963  591333 system_pods.go:61] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:29:51.758968  591333 system_pods.go:61] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running
	I0917 00:29:51.758973  591333 system_pods.go:61] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:29:51.758978  591333 system_pods.go:61] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:29:51.758990  591333 system_pods.go:61] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:29:51.758995  591333 system_pods.go:61] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:29:51.759000  591333 system_pods.go:61] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:29:51.759004  591333 system_pods.go:61] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running
	I0917 00:29:51.759009  591333 system_pods.go:61] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:29:51.759018  591333 system_pods.go:61] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:29:51.759023  591333 system_pods.go:61] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running
	I0917 00:29:51.759027  591333 system_pods.go:61] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:29:51.759035  591333 system_pods.go:61] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:29:51.759039  591333 system_pods.go:61] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:29:51.759049  591333 system_pods.go:61] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:29:51.759054  591333 system_pods.go:61] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:29:51.759058  591333 system_pods.go:61] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running
	I0917 00:29:51.759066  591333 system_pods.go:61] "kube-vip-ha-671025" [d18d568e-7183-4cb4-898f-c730aa8b9811] Running
	I0917 00:29:51.759070  591333 system_pods.go:61] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:29:51.759075  591333 system_pods.go:61] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:29:51.759079  591333 system_pods.go:61] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:29:51.759086  591333 system_pods.go:74] duration metric: took 7.019861ms to wait for pod list to return data ...
	I0917 00:29:51.759106  591333 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:29:51.761820  591333 default_sa.go:45] found service account: "default"
	I0917 00:29:51.761841  591333 default_sa.go:55] duration metric: took 2.726063ms for default service account to be created ...
	I0917 00:29:51.761850  591333 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:29:51.766999  591333 system_pods.go:86] 24 kube-system pods found
	I0917 00:29:51.767031  591333 system_pods.go:89] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running
	I0917 00:29:51.767037  591333 system_pods.go:89] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running
	I0917 00:29:51.767041  591333 system_pods.go:89] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:29:51.767044  591333 system_pods.go:89] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:29:51.767047  591333 system_pods.go:89] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running
	I0917 00:29:51.767050  591333 system_pods.go:89] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:29:51.767053  591333 system_pods.go:89] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:29:51.767057  591333 system_pods.go:89] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:29:51.767060  591333 system_pods.go:89] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:29:51.767062  591333 system_pods.go:89] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:29:51.767066  591333 system_pods.go:89] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running
	I0917 00:29:51.767069  591333 system_pods.go:89] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:29:51.767072  591333 system_pods.go:89] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:29:51.767075  591333 system_pods.go:89] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running
	I0917 00:29:51.767078  591333 system_pods.go:89] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:29:51.767081  591333 system_pods.go:89] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:29:51.767084  591333 system_pods.go:89] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:29:51.767087  591333 system_pods.go:89] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:29:51.767089  591333 system_pods.go:89] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:29:51.767093  591333 system_pods.go:89] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running
	I0917 00:29:51.767095  591333 system_pods.go:89] "kube-vip-ha-671025" [d18d568e-7183-4cb4-898f-c730aa8b9811] Running
	I0917 00:29:51.767099  591333 system_pods.go:89] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:29:51.767105  591333 system_pods.go:89] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:29:51.767108  591333 system_pods.go:89] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:29:51.767115  591333 system_pods.go:126] duration metric: took 5.259145ms to wait for k8s-apps to be running ...
	I0917 00:29:51.767125  591333 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:29:51.767173  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:29:51.780761  591333 system_svc.go:56] duration metric: took 13.623242ms WaitForService to wait for kubelet
	I0917 00:29:51.780795  591333 kubeadm.go:578] duration metric: took 12.673026165s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:29:51.780819  591333 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:29:51.783987  591333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:29:51.784014  591333 node_conditions.go:123] node cpu capacity is 8
	I0917 00:29:51.784059  591333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:29:51.784065  591333 node_conditions.go:123] node cpu capacity is 8
	I0917 00:29:51.784075  591333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:29:51.784081  591333 node_conditions.go:123] node cpu capacity is 8
	I0917 00:29:51.784090  591333 node_conditions.go:105] duration metric: took 3.264516ms to run NodePressure ...
	I0917 00:29:51.784106  591333 start.go:241] waiting for startup goroutines ...
	I0917 00:29:51.784138  591333 start.go:255] writing updated cluster config ...
	I0917 00:29:51.784529  591333 ssh_runner.go:195] Run: rm -f paused
	I0917 00:29:51.788748  591333 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:29:51.789284  591333 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:29:51.792784  591333 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mqh24" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.797966  591333 pod_ready.go:94] pod "coredns-66bc5c9577-mqh24" is "Ready"
	I0917 00:29:51.797991  591333 pod_ready.go:86] duration metric: took 5.183268ms for pod "coredns-66bc5c9577-mqh24" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.798004  591333 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vfj56" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.802611  591333 pod_ready.go:94] pod "coredns-66bc5c9577-vfj56" is "Ready"
	I0917 00:29:51.802634  591333 pod_ready.go:86] duration metric: took 4.623535ms for pod "coredns-66bc5c9577-vfj56" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.805006  591333 pod_ready.go:83] waiting for pod "etcd-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.809379  591333 pod_ready.go:94] pod "etcd-ha-671025" is "Ready"
	I0917 00:29:51.809416  591333 pod_ready.go:86] duration metric: took 4.389649ms for pod "etcd-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.809427  591333 pod_ready.go:83] waiting for pod "etcd-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.813691  591333 pod_ready.go:94] pod "etcd-ha-671025-m02" is "Ready"
	I0917 00:29:51.813712  591333 pod_ready.go:86] duration metric: took 4.279249ms for pod "etcd-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.813720  591333 pod_ready.go:83] waiting for pod "etcd-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.990174  591333 request.go:683] "Waited before sending request" delay="176.338354ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671025-m03"
	I0917 00:29:52.190229  591333 request.go:683] "Waited before sending request" delay="196.333995ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m03"
	I0917 00:29:52.193665  591333 pod_ready.go:94] pod "etcd-ha-671025-m03" is "Ready"
	I0917 00:29:52.193693  591333 pod_ready.go:86] duration metric: took 379.968155ms for pod "etcd-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:52.390210  591333 request.go:683] "Waited before sending request" delay="196.377999ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0917 00:29:52.394451  591333 pod_ready.go:83] waiting for pod "kube-apiserver-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:52.590608  591333 request.go:683] "Waited before sending request" delay="196.01886ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671025"
	I0917 00:29:52.790098  591333 request.go:683] "Waited before sending request" delay="196.369455ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025"
	I0917 00:29:52.793544  591333 pod_ready.go:94] pod "kube-apiserver-ha-671025" is "Ready"
	I0917 00:29:52.793578  591333 pod_ready.go:86] duration metric: took 399.098458ms for pod "kube-apiserver-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:52.793595  591333 pod_ready.go:83] waiting for pod "kube-apiserver-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:52.990070  591333 request.go:683] "Waited before sending request" delay="196.355614ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671025-m02"
	I0917 00:29:53.190086  591333 request.go:683] "Waited before sending request" delay="196.360413ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m02"
	I0917 00:29:53.193284  591333 pod_ready.go:94] pod "kube-apiserver-ha-671025-m02" is "Ready"
	I0917 00:29:53.193311  591333 pod_ready.go:86] duration metric: took 399.708595ms for pod "kube-apiserver-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:53.193320  591333 pod_ready.go:83] waiting for pod "kube-apiserver-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:53.390584  591333 request.go:683] "Waited before sending request" delay="197.147317ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671025-m03"
	I0917 00:29:53.590103  591333 request.go:683] "Waited before sending request" delay="196.290111ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m03"
	I0917 00:29:53.593362  591333 pod_ready.go:94] pod "kube-apiserver-ha-671025-m03" is "Ready"
	I0917 00:29:53.593412  591333 pod_ready.go:86] duration metric: took 400.084881ms for pod "kube-apiserver-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:53.790733  591333 request.go:683] "Waited before sending request" delay="197.180718ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0917 00:29:53.794548  591333 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:53.989879  591333 request.go:683] "Waited before sending request" delay="195.193469ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671025"
	I0917 00:29:54.190518  591333 request.go:683] "Waited before sending request" delay="197.369336ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025"
	I0917 00:29:54.194152  591333 pod_ready.go:94] pod "kube-controller-manager-ha-671025" is "Ready"
	I0917 00:29:54.194183  591333 pod_ready.go:86] duration metric: took 399.607782ms for pod "kube-controller-manager-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:54.194195  591333 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:54.390598  591333 request.go:683] "Waited before sending request" delay="196.290873ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671025-m02"
	I0917 00:29:54.590577  591333 request.go:683] "Waited before sending request" delay="196.311056ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m02"
	I0917 00:29:54.594360  591333 pod_ready.go:94] pod "kube-controller-manager-ha-671025-m02" is "Ready"
	I0917 00:29:54.594432  591333 pod_ready.go:86] duration metric: took 400.227353ms for pod "kube-controller-manager-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:54.594445  591333 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:54.789830  591333 request.go:683] "Waited before sending request" delay="195.263054ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671025-m03"
	I0917 00:29:54.990466  591333 request.go:683] "Waited before sending request" delay="197.342033ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m03"
	I0917 00:29:54.993759  591333 pod_ready.go:94] pod "kube-controller-manager-ha-671025-m03" is "Ready"
	I0917 00:29:54.993788  591333 pod_ready.go:86] duration metric: took 399.335381ms for pod "kube-controller-manager-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:55.190138  591333 request.go:683] "Waited before sending request" delay="196.195607ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0917 00:29:55.194060  591333 pod_ready.go:83] waiting for pod "kube-proxy-4k8lz" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:55.390543  591333 request.go:683] "Waited before sending request" delay="196.36227ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4k8lz"
	I0917 00:29:55.590492  591333 request.go:683] "Waited before sending request" delay="196.425967ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m02"
	I0917 00:29:55.593719  591333 pod_ready.go:94] pod "kube-proxy-4k8lz" is "Ready"
	I0917 00:29:55.593746  591333 pod_ready.go:86] duration metric: took 399.654072ms for pod "kube-proxy-4k8lz" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:55.593753  591333 pod_ready.go:83] waiting for pod "kube-proxy-f58dt" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:55.790222  591333 request.go:683] "Waited before sending request" delay="196.381687ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f58dt"
	I0917 00:29:55.990078  591333 request.go:683] "Waited before sending request" delay="196.35386ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025"
	I0917 00:29:55.993537  591333 pod_ready.go:94] pod "kube-proxy-f58dt" is "Ready"
	I0917 00:29:55.993565  591333 pod_ready.go:86] duration metric: took 399.806033ms for pod "kube-proxy-f58dt" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:55.993573  591333 pod_ready.go:83] waiting for pod "kube-proxy-q96zd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:56.190000  591333 request.go:683] "Waited before sending request" delay="196.348448ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q96zd"
	I0917 00:29:56.390582  591333 request.go:683] "Waited before sending request" delay="197.229029ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m03"
	I0917 00:29:56.393563  591333 pod_ready.go:94] pod "kube-proxy-q96zd" is "Ready"
	I0917 00:29:56.393592  591333 pod_ready.go:86] duration metric: took 400.012384ms for pod "kube-proxy-q96zd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:56.590057  591333 request.go:683] "Waited before sending request" delay="196.329973ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I0917 00:29:56.593914  591333 pod_ready.go:83] waiting for pod "kube-scheduler-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:56.790433  591333 request.go:683] "Waited before sending request" delay="196.375831ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671025"
	I0917 00:29:56.990073  591333 request.go:683] "Waited before sending request" delay="196.373603ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025"
	I0917 00:29:56.993259  591333 pod_ready.go:94] pod "kube-scheduler-ha-671025" is "Ready"
	I0917 00:29:56.993288  591333 pod_ready.go:86] duration metric: took 399.350969ms for pod "kube-scheduler-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:56.993297  591333 pod_ready.go:83] waiting for pod "kube-scheduler-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:57.190549  591333 request.go:683] "Waited before sending request" delay="197.173424ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671025-m02"
	I0917 00:29:57.390069  591333 request.go:683] "Waited before sending request" delay="196.377477ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m02"
	I0917 00:29:57.393214  591333 pod_ready.go:94] pod "kube-scheduler-ha-671025-m02" is "Ready"
	I0917 00:29:57.393243  591333 pod_ready.go:86] duration metric: took 399.939467ms for pod "kube-scheduler-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:57.393254  591333 pod_ready.go:83] waiting for pod "kube-scheduler-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:57.590599  591333 request.go:683] "Waited before sending request" delay="197.214476ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671025-m03"
	I0917 00:29:57.790207  591333 request.go:683] "Waited before sending request" delay="196.332231ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m03"
	I0917 00:29:57.793613  591333 pod_ready.go:94] pod "kube-scheduler-ha-671025-m03" is "Ready"
	I0917 00:29:57.793646  591333 pod_ready.go:86] duration metric: took 400.384119ms for pod "kube-scheduler-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:57.793660  591333 pod_ready.go:40] duration metric: took 6.00487949s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:29:57.841958  591333 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 00:29:57.843747  591333 out.go:179] * Done! kubectl is now configured to use "ha-671025" cluster and "default" namespace by default
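	
	The readiness polling above (node Ready, then one wait per control-plane component label) can be approximated with kubectl's built-in waiter; a rough sketch of equivalent checks:
	  $ kubectl --context ha-671025 wait --for=condition=Ready node/ha-671025-m03 --timeout=6m
	  $ kubectl --context ha-671025 -n kube-system wait pod -l k8s-app=kube-dns \
	      --for=condition=Ready --timeout=4m
	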
	
	
	==> CRI-O <==
	Sep 17 00:28:42 ha-671025 crio[943]: time="2025-09-17 00:28:42.206543981Z" level=info msg="Starting container: 1b2322cca73664c31f8f758bee585a6b9e12f3a99cb34f8075ed9d4ba6a7424e" id=3b28becd-1d34-462d-9922-4034e8ecf6f4 name=/runtime.v1.RuntimeService/StartContainer
	Sep 17 00:28:42 ha-671025 crio[943]: time="2025-09-17 00:28:42.215619295Z" level=info msg="Started container" PID=2320 containerID=1b2322cca73664c31f8f758bee585a6b9e12f3a99cb34f8075ed9d4ba6a7424e description=kube-system/coredns-66bc5c9577-vfj56/coredns id=3b28becd-1d34-462d-9922-4034e8ecf6f4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39dc71832b8bb399ba20ce48f2427629524276766208427b4f7705d2c0d5a7bc
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.112704664Z" level=info msg="Running pod sandbox: default/busybox-7b57f96db7-wj4r5/POD" id=736d7d5c-e0a6-4add-85d8-01da4ad50ed0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.112791033Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.130623397Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-wj4r5 Namespace:default ID:6347f27b59723d9ed5d766202817f12864c3d029b677244c2214fe27b0e75f0f UID:90adda6e-a8af-41fd-880e-3820a76c660d NetNS:/var/run/netns/54f65633-04cf-4581-8596-83e8bb3b45c1 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.130669888Z" level=info msg="Adding pod default_busybox-7b57f96db7-wj4r5 to CNI network \"kindnet\" (type=ptp)"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.142401777Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-wj4r5 Namespace:default ID:6347f27b59723d9ed5d766202817f12864c3d029b677244c2214fe27b0e75f0f UID:90adda6e-a8af-41fd-880e-3820a76c660d NetNS:/var/run/netns/54f65633-04cf-4581-8596-83e8bb3b45c1 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.142574298Z" level=info msg="Checking pod default_busybox-7b57f96db7-wj4r5 for CNI network kindnet (type=ptp)"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.143612429Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.144813443Z" level=info msg="Ran pod sandbox 6347f27b59723d9ed5d766202817f12864c3d029b677244c2214fe27b0e75f0f with infra container: default/busybox-7b57f96db7-wj4r5/POD" id=736d7d5c-e0a6-4add-85d8-01da4ad50ed0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.146339053Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=b8619712-84fc-406a-a07d-46448e259e67 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.146578417Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=b8619712-84fc-406a-a07d-46448e259e67 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.147237951Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=4869ff93-ff5d-4c5f-bc8f-3cabe3c7db56 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.148635276Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.991719699Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.350447433Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=4869ff93-ff5d-4c5f-bc8f-3cabe3c7db56 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.351203929Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=2f8c5eb2-d95f-4e4e-9638-5776fd3166b1 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.352357885Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2f8c5eb2-d95f-4e4e-9638-5776fd3166b1 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.353373442Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=abfbef5f-c90d-4ad8-b2a8-4baf401fbd2d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.354669415Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=abfbef5f-c90d-4ad8-b2a8-4baf401fbd2d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.358933450Z" level=info msg="Creating container: default/busybox-7b57f96db7-wj4r5/busybox" id=05a5a4c3-ddd6-4e31-bcd3-15fa6fbc19a8 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.359053527Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.435258478Z" level=info msg="Created container 7f97d1a1e175b51d7a889f9fe8b94ec1d245d9c3ad1f48bb929cc3544665036a: default/busybox-7b57f96db7-wj4r5/busybox" id=05a5a4c3-ddd6-4e31-bcd3-15fa6fbc19a8 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.436586730Z" level=info msg="Starting container: 7f97d1a1e175b51d7a889f9fe8b94ec1d245d9c3ad1f48bb929cc3544665036a" id=134529e8-d9b9-4298-b3e5-c73a5d72f6fd name=/runtime.v1.RuntimeService/StartContainer
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.446220694Z" level=info msg="Started container" PID=2585 containerID=7f97d1a1e175b51d7a889f9fe8b94ec1d245d9c3ad1f48bb929cc3544665036a description=default/busybox-7b57f96db7-wj4r5/busybox id=134529e8-d9b9-4298-b3e5-c73a5d72f6fd name=/runtime.v1.RuntimeService/StartContainer sandboxID=6347f27b59723d9ed5d766202817f12864c3d029b677244c2214fe27b0e75f0f
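	
	The CRI-O entries above trace the standard CRI call sequence for a cold image: ImageStatus (miss) -> PullImage -> ImageStatus (hit) -> CreateContainer -> StartContainer. The image-side steps can be replayed on the node with crictl (a sketch):
	  $ sudo crictl inspecti gcr.io/k8s-minikube/busybox:1.28   # ImageStatus equivalent
	  $ sudo crictl pull gcr.io/k8s-minikube/busybox:1.28       # PullImage equivalent
	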
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7f97d1a1e175b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   50 seconds ago      Running             busybox                   0                   6347f27b59723       busybox-7b57f96db7-wj4r5
	1b2322cca7366       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      2 minutes ago       Running             coredns                   0                   39dc71832b8bb       coredns-66bc5c9577-vfj56
	2f150c7f7dc18       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       0                   f228c8ac21369       storage-provisioner
	4fd73d6446292       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      2 minutes ago       Running             coredns                   0                   92ca6f4389168       coredns-66bc5c9577-mqh24
	97d03ed4f05c2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      2 minutes ago       Running             kindnet-cni               0                   ad7fd40f66a01       kindnet-9zvhz
	beeb8e61abad9       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      2 minutes ago       Running             kube-proxy                0                   527193be2b767       kube-proxy-f58dt
	ecb56d4cc4c88       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     2 minutes ago       Running             kube-vip                  0                   852e4beaeede7       kube-vip-ha-671025
	7a41c39db49f4       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      2 minutes ago       Running             kube-scheduler            0                   2a00cabb8a637       kube-scheduler-ha-671025
	d4e775bc05e92       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      2 minutes ago       Running             kube-apiserver            0                   e909c5565b688       kube-apiserver-ha-671025
	b966a80c48716       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      2 minutes ago       Running             kube-controller-manager   0                   9e2f63f3286f1       kube-controller-manager-ha-671025
	7819068a50e98       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      2 minutes ago       Running             etcd                      0                   985f7f1c3407d       etcd-ha-671025
	
	
	==> coredns [1b2322cca73664c31f8f758bee585a6b9e12f3a99cb34f8075ed9d4ba6a7424e] <==
	[INFO] 10.244.0.4:52527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000231229s
	[INFO] 10.244.0.4:39416 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.0015558s
	[INFO] 10.244.0.4:45468 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000706318s
	[INFO] 10.244.0.4:53485 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.000087472s
	[INFO] 10.244.1.2:37939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156622s
	[INFO] 10.244.1.2:47463 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.000147027s
	[INFO] 10.244.2.2:34151 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011555178s
	[INFO] 10.244.2.2:39096 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.081855349s
	[INFO] 10.244.2.2:40937 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000241541s
	[INFO] 10.244.0.4:56066 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205334s
	[INFO] 10.244.0.4:52703 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000134531s
	[INFO] 10.244.0.4:56844 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000105782s
	[INFO] 10.244.0.4:52436 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144945s
	[INFO] 10.244.1.2:42520 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154899s
	[INFO] 10.244.1.2:36438 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000196498s
	[INFO] 10.244.2.2:42902 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170395s
	[INFO] 10.244.2.2:44897 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143905s
	[INFO] 10.244.0.4:59616 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105243s
	[INFO] 10.244.1.2:39631 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002321s
	[INFO] 10.244.1.2:59007 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009976s
	[INFO] 10.244.2.2:53521 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000146002s
	[INFO] 10.244.2.2:56762 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000164207s
	[INFO] 10.244.0.4:51093 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145402s
	[INFO] 10.244.0.4:37880 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097925s
	[INFO] 10.244.1.2:55010 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144896s
	
	
	==> coredns [4fd73d6446292f190b136d89cd25bf39fce256818f5056f6d2665d5e4fa5ebd5] <==
	[INFO] 10.244.2.2:37478 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001401s
	[INFO] 10.244.0.4:32873 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013759s
	[INFO] 10.244.0.4:37452 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.006758446s
	[INFO] 10.244.0.4:53096 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156627s
	[INFO] 10.244.0.4:33933 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125115s
	[INFO] 10.244.1.2:46463 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000282565s
	[INFO] 10.244.1.2:39686 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00021884s
	[INFO] 10.244.1.2:54348 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01683783s
	[INFO] 10.244.1.2:54156 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000247643s
	[INFO] 10.244.1.2:51012 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000248315s
	[INFO] 10.244.1.2:49586 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095306s
	[INFO] 10.244.2.2:42847 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150928s
	[INFO] 10.244.2.2:38291 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000461737s
	[INFO] 10.244.0.4:57992 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127693s
	[INFO] 10.244.0.4:53956 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000219562s
	[INFO] 10.244.0.4:34480 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117878s
	[INFO] 10.244.1.2:37372 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177692s
	[INFO] 10.244.1.2:44790 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000227814s
	[INFO] 10.244.2.2:55057 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193926s
	[INFO] 10.244.2.2:51005 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158043s
	[INFO] 10.244.0.4:57976 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000144447s
	[INFO] 10.244.0.4:45233 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113362s
	[INFO] 10.244.1.2:59399 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116822s
	[INFO] 10.244.1.2:55814 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000105565s
	[INFO] 10.244.1.2:33844 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000129758s
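	
	The NXDOMAIN/NOERROR pairs in both coredns logs are the cluster DNS search path at work: a short name such as kubernetes.default is first expanded through the pod's search domains (default.svc.cluster.local, svc.cluster.local, ...) before being tried verbatim. A quick in-cluster resolution check, reusing the busybox image already present above (a sketch):
	  $ kubectl run -it --rm dns-test --restart=Never \
	      --image=gcr.io/k8s-minikube/busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local
	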
	
	
	==> describe nodes <==
	Name:               ha-671025
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T00_28_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:28:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:30:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:30:27 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:30:27 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:30:27 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:30:27 +0000   Wed, 17 Sep 2025 00:28:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-671025
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf085e2718b148b5ad91c414953b197e
	  System UUID:                3f139a28-0338-43b0-8ed0-9128b9dcda65
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wj4r5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 coredns-66bc5c9577-mqh24             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m22s
	  kube-system                 coredns-66bc5c9577-vfj56             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m22s
	  kube-system                 etcd-ha-671025                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m28s
	  kube-system                 kindnet-9zvhz                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-ha-671025             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-ha-671025    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-f58dt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-ha-671025             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-vip-ha-671025                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m21s                  kube-proxy       
	  Normal  NodeHasSufficientPID     2m32s (x8 over 2m32s)  kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m32s (x8 over 2m32s)  kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m32s (x8 over 2m32s)  kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m28s                  kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m28s                  kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m28s                  kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m23s                  node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  NodeReady                2m11s                  kubelet          Node ha-671025 status is now: NodeReady
	  Normal  RegisteredNode           113s                   node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           76s                    node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	
	
	Name:               ha-671025-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_17T00_29_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:29:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:30:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:30:22 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:30:22 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:30:22 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:30:22 +0000   Wed, 17 Sep 2025 00:29:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-671025-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d9e6a6baf694e3db7d6670efecf289a
	  System UUID:                7d7ccba3-1786-4f88-a69c-4a852e967ea0
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-zw5tc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 etcd-ha-671025-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-7scsq                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-ha-671025-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-ha-671025-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-4k8lz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-ha-671025-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-vip-ha-671025-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        107s  kube-proxy       
	  Normal  RegisteredNode  108s  node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode  108s  node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode  76s   node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	
	
	Name:               ha-671025-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_17T00_29_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:29:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:30:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:30:39 +0000   Wed, 17 Sep 2025 00:29:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:30:39 +0000   Wed, 17 Sep 2025 00:29:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:30:39 +0000   Wed, 17 Sep 2025 00:29:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:30:39 +0000   Wed, 17 Sep 2025 00:29:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-671025-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 660e9daa5dff498295dc0311dee374a4
	  System UUID:                ca019c4e-efee-45a1-854b-8ad90ea7fdf4
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-dk9cf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 etcd-ha-671025-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         72s
	  kube-system                 kindnet-9w6f7                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      74s
	  kube-system                 kube-apiserver-ha-671025-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-ha-671025-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-q96zd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-ha-671025-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-vip-ha-671025-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        71s   kube-proxy       
	  Normal  RegisteredNode  73s   node-controller  Node ha-671025-m03 event: Registered Node ha-671025-m03 in Controller
	  Normal  RegisteredNode  73s   node-controller  Node ha-671025-m03 event: Registered Node ha-671025-m03 in Controller
	  Normal  RegisteredNode  71s   node-controller  Node ha-671025-m03 event: Registered Node ha-671025-m03 in Controller
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	
	
	==> etcd [7819068a50e981a28f7aac6e0ffa00b30498aa7a8728f90c252a1dde8a63172c] <==
	{"level":"info","ts":"2025-09-17T00:29:31.065556Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"58f1161d61ce118"}
	{"level":"info","ts":"2025-09-17T00:29:31.065590Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"58f1161d61ce118"}
	{"level":"warn","ts":"2025-09-17T00:29:31.067668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:45592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:29:31.084476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:45608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:29:31.100788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:45628","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:29:38.662149Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-17T00:29:42.031334Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-17T00:29:58.835058Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-17T00:29:58.991840Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-17T00:30:01.018301Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"58f1161d61ce118","bytes":1446419,"size":"1.4 MB","took":"30.017682684s"}
	{"level":"info","ts":"2025-09-17T00:30:09.501879Z","caller":"traceutil/trace.go:172","msg":"trace[2146072419] linearizableReadLoop","detail":"{readStateIndex:1188; appliedIndex:1188; }","duration":"141.203793ms","start":"2025-09-17T00:30:09.360647Z","end":"2025-09-17T00:30:09.501850Z","steps":["trace[2146072419] 'read index received'  (duration: 141.195963ms)","trace[2146072419] 'applied index is now lower than readState.Index'  (duration: 5.958µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-17T00:30:09.505268Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.27894ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040018158788372 > lease_revoke:<id:70cc995512839e0c>","response":"size:29"}
	{"level":"warn","ts":"2025-09-17T00:30:09.505347Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"144.683214ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:30:09.505532Z","caller":"traceutil/trace.go:172","msg":"trace[500820100] transaction","detail":"{read_only:false; response_revision:1005; number_of_response:1; }","duration":"139.040911ms","start":"2025-09-17T00:30:09.366470Z","end":"2025-09-17T00:30:09.505511Z","steps":["trace[500820100] 'process raft request'  (duration: 138.89516ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:30:09.505551Z","caller":"traceutil/trace.go:172","msg":"trace[1619350159] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices; range_end:; response_count:0; response_revision:1004; }","duration":"144.895328ms","start":"2025-09-17T00:30:09.360635Z","end":"2025-09-17T00:30:09.505530Z","steps":["trace[1619350159] 'agreement among raft nodes before linearized reading'  (duration: 141.300792ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:30:09.778515Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"170.407706ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:30:09.778612Z","caller":"traceutil/trace.go:172","msg":"trace[1181430234] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1005; }","duration":"170.522946ms","start":"2025-09-17T00:30:09.608073Z","end":"2025-09-17T00:30:09.778596Z","steps":["trace[1181430234] 'range keys from in-memory index tree'  (duration: 169.782684ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:30:26.742546Z","caller":"traceutil/trace.go:172","msg":"trace[1301104523] linearizableReadLoop","detail":"{readStateIndex:1240; appliedIndex:1240; }","duration":"134.800942ms","start":"2025-09-17T00:30:26.607715Z","end":"2025-09-17T00:30:26.742516Z","steps":["trace[1301104523] 'read index received'  (duration: 134.794574ms)","trace[1301104523] 'applied index is now lower than readState.Index'  (duration: 5.057µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-17T00:30:26.742702Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.951869ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:30:26.742764Z","caller":"traceutil/trace.go:172","msg":"trace[559742275] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1045; }","duration":"135.049537ms","start":"2025-09-17T00:30:26.607704Z","end":"2025-09-17T00:30:26.742754Z","steps":["trace[559742275] 'agreement among raft nodes before linearized reading'  (duration: 134.912912ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:30:26.742748Z","caller":"traceutil/trace.go:172","msg":"trace[1407010545] transaction","detail":"{read_only:false; response_revision:1046; number_of_response:1; }","duration":"138.186392ms","start":"2025-09-17T00:30:26.604547Z","end":"2025-09-17T00:30:26.742734Z","steps":["trace[1407010545] 'process raft request'  (duration: 138.044509ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:30:27.284481Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"b65d66e84a12b94b","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"23.876704ms"}
	{"level":"warn","ts":"2025-09-17T00:30:27.284588Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"58f1161d61ce118","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"23.977845ms"}
	{"level":"info","ts":"2025-09-17T00:30:27.284875Z","caller":"traceutil/trace.go:172","msg":"trace[1317115850] transaction","detail":"{read_only:false; response_revision:1048; number_of_response:1; }","duration":"128.236157ms","start":"2025-09-17T00:30:27.156624Z","end":"2025-09-17T00:30:27.284860Z","steps":["trace[1317115850] 'process raft request'  (duration: 128.097873ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:30:27.895598Z","caller":"traceutil/trace.go:172","msg":"trace[11920158] transaction","detail":"{read_only:false; response_revision:1050; number_of_response:1; }","duration":"148.026679ms","start":"2025-09-17T00:30:27.747545Z","end":"2025-09-17T00:30:27.895572Z","steps":["trace[11920158] 'process raft request'  (duration: 101.895012ms)","trace[11920158] 'compare'  (duration: 45.996426ms)"],"step_count":2}
	
	
	==> kernel <==
	 00:30:52 up  3:13,  0 users,  load average: 0.90, 0.51, 5.12
	Linux ha-671025 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [97d03ed4f05c2c8a7edb2014248bdbf3d9cfbee7da82980f69fec92e92471166] <==
	I0917 00:30:11.204810       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:30:21.212489       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:30:21.212536       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:30:21.212827       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:30:21.212840       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:30:21.212973       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:30:21.212983       1 main.go:301] handling current node
	I0917 00:30:31.203606       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:30:31.203652       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:30:31.203966       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:30:31.203990       1 main.go:301] handling current node
	I0917 00:30:31.204009       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:30:31.204015       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:30:41.203515       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:30:41.203557       1 main.go:301] handling current node
	I0917 00:30:41.203599       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:30:41.203604       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:30:41.203792       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:30:41.203806       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:30:51.212617       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:30:51.212663       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:30:51.212861       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:30:51.212872       1 main.go:301] handling current node
	I0917 00:30:51.212888       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:30:51.212893       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [d4e775bc05e92406988cf96c77fa7e581cfe8cc2f3f70e1efc89c2ec23a63e4a] <==
	I0917 00:28:24.325254       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0917 00:28:24.746459       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0917 00:28:24.756910       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0917 00:28:24.764710       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0917 00:28:29.928906       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:28:29.932824       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:28:30.328091       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0917 00:28:30.429040       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0917 00:29:34.977143       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:29:44.951924       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0917 00:30:02.333807       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45142: use of closed network connection
	E0917 00:30:02.515957       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45160: use of closed network connection
	E0917 00:30:02.696738       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45172: use of closed network connection
	E0917 00:30:02.975357       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45188: use of closed network connection
	E0917 00:30:03.163201       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45206: use of closed network connection
	E0917 00:30:03.360510       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45214: use of closed network connection
	E0917 00:30:03.537260       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45238: use of closed network connection
	E0917 00:30:03.723220       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45262: use of closed network connection
	E0917 00:30:03.899588       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45288: use of closed network connection
	E0917 00:30:04.199638       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45314: use of closed network connection
	E0917 00:30:04.375427       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45330: use of closed network connection
	E0917 00:30:04.546665       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45360: use of closed network connection
	E0917 00:30:04.718966       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45380: use of closed network connection
	E0917 00:30:04.893333       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45402: use of closed network connection
	E0917 00:30:05.069202       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45414: use of closed network connection
	
	
	==> kube-controller-manager [b966a80c487167a8ef5e8ce7981e5a50b500e5d8ce6a71e00ed74b342da31465] <==
	I0917 00:28:29.324302       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:28:29.324327       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0917 00:28:29.324356       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0917 00:28:29.325297       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0917 00:28:29.325324       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0917 00:28:29.325364       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0917 00:28:29.325335       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0917 00:28:29.325427       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0917 00:28:29.326766       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0917 00:28:29.333261       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:28:29.333638       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:28:29.333657       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0917 00:28:29.333665       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 00:28:29.340961       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0917 00:28:29.343294       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0917 00:28:29.353739       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:28:44.313285       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E0917 00:29:00.309163       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-g7wk8 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-g7wk8\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0917 00:29:00.997925       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-671025-m02\" does not exist"
	I0917 00:29:01.017089       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-671025-m02" podCIDRs=["10.244.1.0/24"]
	I0917 00:29:04.315749       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-671025-m02"
	E0917 00:29:37.100559       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-4vrlk failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-4vrlk\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0917 00:29:38.581695       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-671025-m03\" does not exist"
	I0917 00:29:38.589924       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-671025-m03" podCIDRs=["10.244.2.0/24"]
	I0917 00:29:39.436557       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-671025-m03"
	
	
	==> kube-proxy [beeb8e61abad9cff9c53d8b6d7bd473fa1b23bbe18bf4739d34ffc8956376ff2] <==
	I0917 00:28:30.830323       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:28:30.891652       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:28:30.992026       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:28:30.992089       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:28:30.992227       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:28:31.013108       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:28:31.013179       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:28:31.018687       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:28:31.019218       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:28:31.019253       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:28:31.020737       1 config.go:200] "Starting service config controller"
	I0917 00:28:31.020764       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:28:31.020800       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:28:31.020809       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:28:31.020897       1 config.go:309] "Starting node config controller"
	I0917 00:28:31.020964       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:28:31.021001       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:28:31.021018       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:28:31.021055       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:28:31.121005       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:28:31.121031       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:28:31.121168       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7a41c39db49f45380d579839f82d520984625d29f4dabaef0381390e6bdf676a] <==
	E0917 00:28:22.635845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:28:22.635883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0917 00:28:22.635646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0917 00:28:22.635968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0917 00:28:22.636038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0917 00:28:22.636058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0917 00:28:22.636404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0917 00:28:22.636428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0917 00:28:22.636582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0917 00:28:22.636623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0917 00:28:22.636965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0917 00:28:23.460819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0917 00:28:23.509027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0917 00:28:23.580561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:28:23.582654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0917 00:28:23.693685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0917 00:28:26.831507       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0917 00:29:01.061353       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-t9sbk\": pod kindnet-t9sbk is already assigned to node \"ha-671025-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-t9sbk" node="ha-671025-m02"
	E0917 00:29:01.061564       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 138da6b8-9faf-407f-8647-78ecb92029f1(kube-system/kindnet-t9sbk) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-t9sbk"
	E0917 00:29:01.061607       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-t9sbk\": pod kindnet-t9sbk is already assigned to node \"ha-671025-m02\"" logger="UnhandledError" pod="kube-system/kindnet-t9sbk"
	I0917 00:29:01.062825       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-t9sbk" node="ha-671025-m02"
	E0917 00:29:38.625075       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-q96zd\": pod kube-proxy-q96zd is already assigned to node \"ha-671025-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-q96zd" node="ha-671025-m03"
	E0917 00:29:38.625173       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 9fe8a312-c296-4c84-9c30-5e578c24e82e(kube-system/kube-proxy-q96zd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-q96zd"
	E0917 00:29:38.625194       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-q96zd\": pod kube-proxy-q96zd is already assigned to node \"ha-671025-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-q96zd"
	I0917 00:29:38.626798       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-q96zd" node="ha-671025-m03"
	
	
	==> kubelet <==
	Sep 17 00:28:54 ha-671025 kubelet[1668]: E0917 00:28:54.582788    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068934582486457  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:04 ha-671025 kubelet[1668]: E0917 00:29:04.584007    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068944583759061  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:04 ha-671025 kubelet[1668]: E0917 00:29:04.584046    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068944583759061  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:14 ha-671025 kubelet[1668]: E0917 00:29:14.585159    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068954584899808  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:14 ha-671025 kubelet[1668]: E0917 00:29:14.585207    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068954584899808  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:24 ha-671025 kubelet[1668]: E0917 00:29:24.586593    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068964586327984  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:24 ha-671025 kubelet[1668]: E0917 00:29:24.586624    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068964586327984  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:34 ha-671025 kubelet[1668]: E0917 00:29:34.587985    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068974587766323  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:34 ha-671025 kubelet[1668]: E0917 00:29:34.588046    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068974587766323  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:44 ha-671025 kubelet[1668]: E0917 00:29:44.589297    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068984589063590  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:44 ha-671025 kubelet[1668]: E0917 00:29:44.589343    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068984589063590  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:54 ha-671025 kubelet[1668]: E0917 00:29:54.592592    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068994591703153  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:54 ha-671025 kubelet[1668]: E0917 00:29:54.592634    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068994591703153  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:58 ha-671025 kubelet[1668]: I0917 00:29:58.902373    1668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n7vc\" (UniqueName: \"kubernetes.io/projected/90adda6e-a8af-41fd-880e-3820a76c660d-kube-api-access-2n7vc\") pod \"busybox-7b57f96db7-wj4r5\" (UID: \"90adda6e-a8af-41fd-880e-3820a76c660d\") " pod="default/busybox-7b57f96db7-wj4r5"
	Sep 17 00:30:02 ha-671025 kubelet[1668]: E0917 00:30:02.515952    1668 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41316->127.0.0.1:37239: write tcp 127.0.0.1:41316->127.0.0.1:37239: write: broken pipe
	Sep 17 00:30:04 ha-671025 kubelet[1668]: E0917 00:30:04.594113    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069004593825500  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:04 ha-671025 kubelet[1668]: E0917 00:30:04.594155    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069004593825500  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:14 ha-671025 kubelet[1668]: E0917 00:30:14.595504    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069014595204257  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:14 ha-671025 kubelet[1668]: E0917 00:30:14.595637    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069014595204257  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:24 ha-671025 kubelet[1668]: E0917 00:30:24.597161    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069024596864722  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:24 ha-671025 kubelet[1668]: E0917 00:30:24.597200    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069024596864722  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:34 ha-671025 kubelet[1668]: E0917 00:30:34.598240    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069034598011866  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:34 ha-671025 kubelet[1668]: E0917 00:30:34.598284    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069034598011866  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:44 ha-671025 kubelet[1668]: E0917 00:30:44.600122    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069044599859993  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:44 ha-671025 kubelet[1668]: E0917 00:30:44.600164    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069044599859993  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-671025 -n ha-671025
helpers_test.go:269: (dbg) Run:  kubectl --context ha-671025 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (16.37s)
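The post-mortem above is assembled from ordinary kubectl/minikube inspection commands (the exact invocations appear in the helpers_test.go lines). A minimal sketch for re-collecting the same data by hand, assuming the ha-671025 profile and kubeconfig context still exist; `kubectl describe nodes` is an assumed source for the Name:/Roles:/Conditions: node tables shown earlier:

	# API server status for the profile, as run by helpers_test.go:262
	out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-671025 -n ha-671025

	# Names of all pods not in Running phase, as run by helpers_test.go:269
	kubectl --context ha-671025 get po -A -o=jsonpath={.items[*].metadata.name} --field-selector=status.phase!=Running

	# Node tables like those above (assumed equivalent of the harness's node dump)
	kubectl --context ha-671025 describe nodes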
TestMultiControlPlane/serial/StopSecondaryNode (21.85s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-671025 node stop m02 --alsologtostderr -v 5: (19.146481473s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5: exit status 7 (550.149483ms)
-- stdout --
	ha-671025
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-671025-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 00:31:12.454308  611545 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:31:12.454612  611545 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:31:12.454624  611545 out.go:374] Setting ErrFile to fd 2...
	I0917 00:31:12.454629  611545 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:31:12.454821  611545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:31:12.454993  611545 out.go:368] Setting JSON to false
	I0917 00:31:12.455014  611545 mustload.go:65] Loading cluster: ha-671025
	I0917 00:31:12.455190  611545 notify.go:220] Checking for updates...
	I0917 00:31:12.455477  611545 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:31:12.455521  611545 status.go:174] checking status of ha-671025 ...
	I0917 00:31:12.456066  611545 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:31:12.477304  611545 status.go:371] ha-671025 host status = "Running" (err=<nil>)
	I0917 00:31:12.477375  611545 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:31:12.477689  611545 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:31:12.496459  611545 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:31:12.496709  611545 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:12.496753  611545 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:31:12.515577  611545 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:31:12.610859  611545 ssh_runner.go:195] Run: systemctl --version
	I0917 00:31:12.615595  611545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:12.627655  611545 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:31:12.687294  611545 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-17 00:31:12.676450453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:31:12.687839  611545 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:12.687872  611545 api_server.go:166] Checking apiserver status ...
	I0917 00:31:12.687911  611545 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:12.700238  611545 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	W0917 00:31:12.710494  611545 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:12.710565  611545 ssh_runner.go:195] Run: ls
	I0917 00:31:12.714581  611545 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:12.718739  611545 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:12.718768  611545 status.go:463] ha-671025 apiserver status = Running (err=<nil>)
	I0917 00:31:12.718780  611545 status.go:176] ha-671025 status: &{Name:ha-671025 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:12.718797  611545 status.go:174] checking status of ha-671025-m02 ...
	I0917 00:31:12.719121  611545 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:31:12.736902  611545 status.go:371] ha-671025-m02 host status = "Stopped" (err=<nil>)
	I0917 00:31:12.736925  611545 status.go:384] host is not running, skipping remaining checks
	I0917 00:31:12.736932  611545 status.go:176] ha-671025-m02 status: &{Name:ha-671025-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:12.736960  611545 status.go:174] checking status of ha-671025-m03 ...
	I0917 00:31:12.737240  611545 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:31:12.755471  611545 status.go:371] ha-671025-m03 host status = "Running" (err=<nil>)
	I0917 00:31:12.755498  611545 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:31:12.755735  611545 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:31:12.774823  611545 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:31:12.775105  611545 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:12.775144  611545 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:31:12.793777  611545 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:31:12.887890  611545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:12.900663  611545 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:12.900696  611545 api_server.go:166] Checking apiserver status ...
	I0917 00:31:12.900739  611545 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:12.912561  611545 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup
	W0917 00:31:12.925029  611545 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:12.925099  611545 ssh_runner.go:195] Run: ls
	I0917 00:31:12.929534  611545 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:12.933753  611545 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:12.933781  611545 status.go:463] ha-671025-m03 apiserver status = Running (err=<nil>)
	I0917 00:31:12.933792  611545 status.go:176] ha-671025-m03 status: &{Name:ha-671025-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:12.933808  611545 status.go:174] checking status of ha-671025-m04 ...
	I0917 00:31:12.934061  611545 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:31:12.952883  611545 status.go:371] ha-671025-m04 host status = "Stopped" (err=<nil>)
	I0917 00:31:12.952931  611545 status.go:384] host is not running, skipping remaining checks
	I0917 00:31:12.952944  611545 status.go:176] ha-671025-m04 status: &{Name:ha-671025-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
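The stderr block above shows how each node's status is derived: a `docker container inspect --format={{.State.Status}}` call decides the `Host:` field (so the kubelet and apiserver checks are skipped entirely for the stopped m02 and m04 containers), and for running nodes the API server is probed at https://192.168.49.254:8443/healthz. The `unable to find freezer cgroup` warnings are expected on a cgroup v2 host like this 6.8.0-1037-gcp kernel, where no named `freezer` controller exists, so the code falls through to the HTTP probe. A minimal standalone sketch of the same two checks (a hypothetical helper, not minikube's actual implementation; TLS verification is skipped here because the apiserver certificate is signed by the cluster-local minikubeCA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"os/exec"
    	"strings"
    	"time"
    )

    // hostState mirrors the log's
    //   docker container inspect <name> --format={{.State.Status}}
    // calls and returns docker's state string ("running", "exited", ...).
    func hostState(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}").Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    // apiserverHealthy probes <endpoint>/healthz and accepts only a 200 reply,
    // matching the "returned 200: ok" lines in the log above.
    func apiserverHealthy(endpoint string) bool {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Hedge: skip verification for the cluster-local CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(endpoint + "/healthz")
    	if err != nil {
    		return false
    	}
    	defer resp.Body.Close()
    	return resp.StatusCode == http.StatusOK
    }

    func main() {
    	nodes := []string{"ha-671025", "ha-671025-m02", "ha-671025-m03", "ha-671025-m04"}
    	for _, node := range nodes {
    		state, err := hostState(node)
    		if err != nil || state != "running" {
    			// This is why m02/m04 report every component as Stopped.
    			fmt.Printf("%s: Host=Stopped (remaining checks skipped)\n", node)
    			continue
    		}
    		fmt.Printf("%s: Host=Running apiserverHealthy=%v\n",
    			node, apiserverHealthy("https://192.168.49.254:8443"))
    	}
    }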
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5": ha-671025
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-671025-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-671025-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-671025-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5": ha-671025
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-671025-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-671025-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-671025-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-671025
helpers_test.go:243: (dbg) docker inspect ha-671025:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea",
	        "Created": "2025-09-17T00:28:07.60079298Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 591894,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:28:07.642349633Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/hostname",
	        "HostsPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/hosts",
	        "LogPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea-json.log",
	        "Name": "/ha-671025",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-671025:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-671025",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea",
	                "LowerDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-671025",
	                "Source": "/var/lib/docker/volumes/ha-671025/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-671025",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-671025",
	                "name.minikube.sigs.k8s.io": "ha-671025",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2947b2c900e461fedf4c1b14afccf677c0bbbd5856a737563908fb819f368e69",
	            "SandboxKey": "/var/run/docker/netns/2947b2c900e4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-671025": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:4e:63:a1:43:0d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0c35d0ccc41812bde7181e33c481a92e6c52d2d90efef6c84bca54a78763ef8",
	                    "EndpointID": "e04f7d855de79c251547e2cb959967e0ee3cd816f6030c7dc40e9731e31f953c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-671025",
	                        "843490787feb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
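Note how `HostConfig.PortBindings` requests each container port on 127.0.0.1 with an empty `HostPort` (letting docker pick one), while `NetworkSettings.Ports` carries the ephemeral ports actually assigned (33148 for SSH on this container). The ssh client setup later in these logs reads that assignment back with a Go template; a sketch of the same lookup (hypothetical helper, assuming the docker CLI is on PATH):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshHostPort resolves the host port docker mapped to the container's
    // 22/tcp, using the same template as the cli_runner invocations below:
    //   docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' <name>
    func sshHostPort(container string) (string, error) {
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("ha-671025")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	// With the container inspected above this prints 127.0.0.1:33148.
    	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
    }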
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-671025 -n ha-671025
helpers_test.go:252: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-671025 logs -n 25: (1.286465356s)
helpers_test.go:260: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile688907033/001/cp-test_ha-671025-m03.txt │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ cp      │ ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt ha-671025:/home/docker/cp-test_ha-671025-m03_ha-671025.txt                      │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025 sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025.txt                                                │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ cp      │ ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt ha-671025-m02:/home/docker/cp-test_ha-671025-m03_ha-671025-m02.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m02 sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025-m02.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ cp      │ ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt ha-671025-m04:/home/docker/cp-test_ha-671025-m03_ha-671025-m04.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025-m04.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp testdata/cp-test.txt ha-671025-m04:/home/docker/cp-test.txt                                                            │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile688907033/001/cp-test_ha-671025-m04.txt │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025:/home/docker/cp-test_ha-671025-m04_ha-671025.txt                      │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025.txt                                                │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025-m02:/home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m02 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025-m03:/home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ node    │ ha-671025 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:31 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:28:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:28:02.421105  591333 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:28:02.421342  591333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:28:02.421350  591333 out.go:374] Setting ErrFile to fd 2...
	I0917 00:28:02.421355  591333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:28:02.421569  591333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:28:02.422069  591333 out.go:368] Setting JSON to false
	I0917 00:28:02.422989  591333 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11425,"bootTime":1758057457,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:28:02.423098  591333 start.go:140] virtualization: kvm guest
	I0917 00:28:02.425200  591333 out.go:179] * [ha-671025] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:28:02.426666  591333 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:28:02.426650  591333 notify.go:220] Checking for updates...
	I0917 00:28:02.429221  591333 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:28:02.430609  591333 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:28:02.431832  591333 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 00:28:02.433241  591333 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:28:02.434707  591333 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:28:02.436048  591333 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:28:02.460585  591333 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:28:02.460765  591333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:28:02.517630  591333 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:28:02.506821705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:28:02.517750  591333 docker.go:318] overlay module found
	I0917 00:28:02.519568  591333 out.go:179] * Using the docker driver based on user configuration
	I0917 00:28:02.520915  591333 start.go:304] selected driver: docker
	I0917 00:28:02.520935  591333 start.go:918] validating driver "docker" against <nil>
	I0917 00:28:02.520951  591333 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:28:02.521682  591333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:28:02.578543  591333 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:28:02.56897484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:28:02.578724  591333 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0917 00:28:02.578937  591333 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:28:02.580907  591333 out.go:179] * Using Docker driver with root privileges
	I0917 00:28:02.582377  591333 cni.go:84] Creating CNI manager for ""
	I0917 00:28:02.582477  591333 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0917 00:28:02.582493  591333 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 00:28:02.582574  591333 start.go:348] cluster config:
	{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:28:02.583947  591333 out.go:179] * Starting "ha-671025" primary control-plane node in "ha-671025" cluster
	I0917 00:28:02.585129  591333 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:28:02.586454  591333 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:28:02.587786  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:28:02.587830  591333 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 00:28:02.587838  591333 cache.go:58] Caching tarball of preloaded images
	I0917 00:28:02.587843  591333 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:28:02.587944  591333 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:28:02.587958  591333 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:28:02.588350  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:28:02.588379  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json: {Name:mk091aa75e831ff22299b49a9817446c9f212399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:02.609265  591333 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:28:02.609287  591333 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:28:02.609305  591333 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:28:02.609329  591333 start.go:360] acquireMachinesLock for ha-671025: {Name:mk59b9e849284ed1f29625993b42430f4f0355ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:28:02.609454  591333 start.go:364] duration metric: took 102.584µs to acquireMachinesLock for "ha-671025"
	I0917 00:28:02.609482  591333 start.go:93] Provisioning new machine with config: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:28:02.609540  591333 start.go:125] createHost starting for "" (driver="docker")
	I0917 00:28:02.611610  591333 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:28:02.611847  591333 start.go:159] libmachine.API.Create for "ha-671025" (driver="docker")
	I0917 00:28:02.611880  591333 client.go:168] LocalClient.Create starting
	I0917 00:28:02.611969  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 00:28:02.612007  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:28:02.612019  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:28:02.612089  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 00:28:02.612110  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:28:02.612122  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:28:02.612504  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 00:28:02.630138  591333 cli_runner.go:211] docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 00:28:02.630214  591333 network_create.go:284] running [docker network inspect ha-671025] to gather additional debugging logs...
	I0917 00:28:02.630235  591333 cli_runner.go:164] Run: docker network inspect ha-671025
	W0917 00:28:02.647610  591333 cli_runner.go:211] docker network inspect ha-671025 returned with exit code 1
	I0917 00:28:02.647648  591333 network_create.go:287] error running [docker network inspect ha-671025]: docker network inspect ha-671025: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-671025 not found
	I0917 00:28:02.647665  591333 network_create.go:289] output of [docker network inspect ha-671025]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-671025 not found
	
	** /stderr **
	I0917 00:28:02.647783  591333 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:28:02.666874  591333 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014926f0}
	I0917 00:28:02.666937  591333 network_create.go:124] attempt to create docker network ha-671025 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0917 00:28:02.666993  591333 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-671025 ha-671025
	I0917 00:28:02.726570  591333 network_create.go:108] docker network ha-671025 192.168.49.0/24 created
	I0917 00:28:02.726603  591333 kic.go:121] calculated static IP "192.168.49.2" for the "ha-671025" container
	I0917 00:28:02.726684  591333 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:28:02.744335  591333 cli_runner.go:164] Run: docker volume create ha-671025 --label name.minikube.sigs.k8s.io=ha-671025 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:28:02.765618  591333 oci.go:103] Successfully created a docker volume ha-671025
	I0917 00:28:02.765710  591333 cli_runner.go:164] Run: docker run --rm --name ha-671025-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025 --entrypoint /usr/bin/test -v ha-671025:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:28:03.152134  591333 oci.go:107] Successfully prepared a docker volume ha-671025
	I0917 00:28:03.152201  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:28:03.152229  591333 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:28:03.152307  591333 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:28:07.519336  591333 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.366963199s)
	I0917 00:28:07.519373  591333 kic.go:203] duration metric: took 4.3671415s to extract preloaded images to volume ...
	W0917 00:28:07.519497  591333 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:28:07.519557  591333 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:28:07.519606  591333 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:28:07.583258  591333 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-671025 --name ha-671025 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-671025 --network ha-671025 --ip 192.168.49.2 --volume ha-671025:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 00:28:07.861983  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Running}}
	I0917 00:28:07.881740  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:07.902486  591333 cli_runner.go:164] Run: docker exec ha-671025 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:28:07.957445  591333 oci.go:144] the created container "ha-671025" has a running status.
	I0917 00:28:07.957491  591333 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa...
	I0917 00:28:07.970221  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0917 00:28:07.970277  591333 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:28:07.996810  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:08.018618  591333 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 00:28:08.018648  591333 kic_runner.go:114] Args: [docker exec --privileged ha-671025 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 00:28:08.065859  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:08.088307  591333 machine.go:93] provisionDockerMachine start ...
	I0917 00:28:08.088464  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:08.112791  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:08.113142  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0917 00:28:08.113159  591333 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:28:08.114236  591333 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41092->127.0.0.1:33148: read: connection reset by peer
	I0917 00:28:11.250841  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025
	
	I0917 00:28:11.250869  591333 ubuntu.go:182] provisioning hostname "ha-671025"
	I0917 00:28:11.250946  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:11.270326  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:11.270573  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0917 00:28:11.270589  591333 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025 && echo "ha-671025" | sudo tee /etc/hostname
	I0917 00:28:11.422194  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025
	
	I0917 00:28:11.422282  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:11.441086  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:11.441373  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0917 00:28:11.441412  591333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:28:11.579534  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:28:11.579570  591333 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:28:11.579606  591333 ubuntu.go:190] setting up certificates
	I0917 00:28:11.579621  591333 provision.go:84] configureAuth start
	I0917 00:28:11.579696  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:28:11.598338  591333 provision.go:143] copyHostCerts
	I0917 00:28:11.598381  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:28:11.598438  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:28:11.598450  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:28:11.598528  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:28:11.598637  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:28:11.598660  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:28:11.598668  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:28:11.598709  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:28:11.598793  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:28:11.598818  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:28:11.598827  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:28:11.598863  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:28:11.598936  591333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025 san=[127.0.0.1 192.168.49.2 ha-671025 localhost minikube]
	I0917 00:28:11.692056  591333 provision.go:177] copyRemoteCerts
	I0917 00:28:11.692126  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:28:11.692177  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:11.710836  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:11.809661  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:28:11.809738  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:28:11.838472  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:28:11.838547  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0917 00:28:11.864972  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:28:11.865064  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 00:28:11.892502  591333 provision.go:87] duration metric: took 312.863604ms to configureAuth
	I0917 00:28:11.892539  591333 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:28:11.892749  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:28:11.892876  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:11.911894  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:11.912108  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0917 00:28:11.912123  591333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:28:12.156893  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:28:12.156918  591333 machine.go:96] duration metric: took 4.068577091s to provisionDockerMachine
	I0917 00:28:12.156929  591333 client.go:171] duration metric: took 9.545042483s to LocalClient.Create
	I0917 00:28:12.156950  591333 start.go:167] duration metric: took 9.54510971s to libmachine.API.Create "ha-671025"
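Note on the provisioning step above: the CRIO_MINIKUBE_OPTIONS drop-in written over SSH a few lines earlier is presumably consumed by the crio unit as a systemd EnvironmentFile, so a quick sanity check on the node (a suggested check, not part of this run) would be:

	cat /etc/sysconfig/crio.minikube
	systemctl is-active crio

Both should reflect the '--insecure-registry 10.96.0.0/12' option and an active CRI-O after the restart.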
	I0917 00:28:12.156957  591333 start.go:293] postStartSetup for "ha-671025" (driver="docker")
	I0917 00:28:12.156965  591333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:28:12.157043  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:28:12.157079  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:12.175648  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:12.275414  591333 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:28:12.279194  591333 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:28:12.279224  591333 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:28:12.279231  591333 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:28:12.279238  591333 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:28:12.279255  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:28:12.279317  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:28:12.279416  591333 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:28:12.279430  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:28:12.279530  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:28:12.288873  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:28:12.317418  591333 start.go:296] duration metric: took 160.444141ms for postStartSetup
	I0917 00:28:12.317811  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:28:12.336261  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:28:12.336565  591333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:28:12.336607  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:12.354705  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:12.446983  591333 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:28:12.451593  591333 start.go:128] duration metric: took 9.842036225s to createHost
	I0917 00:28:12.451634  591333 start.go:83] releasing machines lock for "ha-671025", held for 9.842165682s
	I0917 00:28:12.451714  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:28:12.469798  591333 ssh_runner.go:195] Run: cat /version.json
	I0917 00:28:12.469852  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:12.469869  591333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:28:12.469931  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:12.489508  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:12.489501  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:12.581676  591333 ssh_runner.go:195] Run: systemctl --version
	I0917 00:28:12.654927  591333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:28:12.796661  591333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:28:12.802016  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:28:12.827191  591333 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:28:12.827278  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:28:12.858197  591333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
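For context, the find/mv invocations above sideline the pre-existing loopback, bridge, and podman CNI configs by renaming them with a .mk_disabled suffix; the affected files are named in the cni.go:262 line. Listing the directory on a node provisioned this way would be expected to show only the renamed configs (illustrative, not captured in this run):

	ls /etc/cni/net.d/
	100-crio-bridge.conf.mk_disabled
	87-podman-bridge.conflist.mk_disabled

leaving the CNI manifest applied later as the only active configuration.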
	I0917 00:28:12.858222  591333 start.go:495] detecting cgroup driver to use...
	I0917 00:28:12.858256  591333 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:28:12.858306  591333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:28:12.874462  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:28:12.887158  591333 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:28:12.887226  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:28:12.902417  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:28:12.917174  591333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:28:12.986628  591333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:28:13.060583  591333 docker.go:234] disabling docker service ...
	I0917 00:28:13.060656  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:28:13.081466  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:28:13.094012  591333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:28:13.164943  591333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:28:13.315404  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:28:13.328708  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:28:13.347694  591333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:28:13.347757  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.361221  591333 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:28:13.361294  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.371972  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.382985  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.394505  591333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:28:13.405096  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.416205  591333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.434282  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.445654  591333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:28:13.454948  591333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:28:13.464245  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:28:13.526087  591333 ssh_runner.go:195] Run: sudo systemctl restart crio
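Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (reconstructed from the commands; the file itself is not dumped in this log):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

which the daemon-reload and crio restart then pick up.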
	I0917 00:28:13.629597  591333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:28:13.629677  591333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:28:13.634535  591333 start.go:563] Will wait 60s for crictl version
	I0917 00:28:13.634599  591333 ssh_runner.go:195] Run: which crictl
	I0917 00:28:13.639122  591333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:28:13.675949  591333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:28:13.676043  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:28:13.713216  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:28:13.752386  591333 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:28:13.753755  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:28:13.771156  591333 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:28:13.775524  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:28:13.788890  591333 kubeadm.go:875] updating cluster {Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:28:13.789115  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:28:13.789184  591333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:28:13.863780  591333 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:28:13.863811  591333 crio.go:433] Images already preloaded, skipping extraction
	I0917 00:28:13.863873  591333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:28:13.900999  591333 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:28:13.901021  591333 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:28:13.901028  591333 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0917 00:28:13.901149  591333 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:28:13.901218  591333 ssh_runner.go:195] Run: crio config
	I0917 00:28:13.947330  591333 cni.go:84] Creating CNI manager for ""
	I0917 00:28:13.947354  591333 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0917 00:28:13.947367  591333 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:28:13.947398  591333 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-671025 NodeName:ha-671025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:28:13.947540  591333 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-671025"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
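	A generated config like the one above can be exercised without mutating the node; assuming the same kubeadm binary that the real init below uses, a dry run covers the same phases:

	sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run

	(the actual invocation later in this log adds --ignore-preflight-errors overrides on top of this).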
	
	I0917 00:28:13.947571  591333 kube-vip.go:115] generating kube-vip config ...
	I0917 00:28:13.947618  591333 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:28:13.962176  591333 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:28:13.962288  591333 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
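The manifest above provides the HA API-server VIP (192.168.49.254) via ARP plus Kubernetes leader election; note the ip_vs fallback decision logged just before it. Once the cluster is up, which control-plane node currently holds the VIP can be read from the lease named by vip_leasename (a suggested check, not run here):

	kubectl -n kube-system get lease plndr-cp-lock \
	  -o jsonpath='{.spec.holderIdentity}'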
	I0917 00:28:13.962356  591333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:28:13.972318  591333 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:28:13.972409  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:28:13.982775  591333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0917 00:28:14.003185  591333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:28:14.025114  591333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I0917 00:28:14.043893  591333 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0917 00:28:14.063914  591333 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:28:14.067851  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:28:14.079495  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:28:14.146352  591333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:28:14.170001  591333 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.2
	I0917 00:28:14.170029  591333 certs.go:194] generating shared ca certs ...
	I0917 00:28:14.170049  591333 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.170209  591333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:28:14.170248  591333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:28:14.170258  591333 certs.go:256] generating profile certs ...
	I0917 00:28:14.170312  591333 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:28:14.170334  591333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt with IP's: []
	I0917 00:28:14.258881  591333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt ...
	I0917 00:28:14.258912  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt: {Name:mkf356a325e81df463620a9a59f1e19636a8bbe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.259129  591333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key ...
	I0917 00:28:14.259150  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key: {Name:mka2338ec2b6b28954ea0ef14eeb3d06111be43d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.259268  591333 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.42f16444
	I0917 00:28:14.259285  591333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.42f16444 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0917 00:28:14.420479  591333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.42f16444 ...
	I0917 00:28:14.420509  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.42f16444: {Name:mkcf98c32344d33f146459467ae0b529b09930e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.420720  591333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.42f16444 ...
	I0917 00:28:14.420744  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.42f16444: {Name:mk2a9dddb825d571b4beb46eeddb7582f0b5a38a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.420868  591333 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.42f16444 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt
	I0917 00:28:14.420963  591333 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.42f16444 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key
	I0917 00:28:14.421066  591333 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:28:14.421086  591333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt with IP's: []
	I0917 00:28:14.667928  591333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt ...
	I0917 00:28:14.667965  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt: {Name:mk8fc3d9cf0ef31fe8163e3202ec93ff4212c0d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.668186  591333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key ...
	I0917 00:28:14.668205  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key: {Name:mk4aadb37423b11008cecd193572dcb26f4156f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.668320  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:28:14.668341  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:28:14.668351  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:28:14.668364  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:28:14.668375  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:28:14.668386  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:28:14.668408  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:28:14.668420  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:28:14.668487  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:28:14.668524  591333 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:28:14.668533  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:28:14.668554  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:28:14.668631  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:28:14.668666  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:28:14.668710  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:28:14.668747  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:28:14.668764  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:14.668780  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:28:14.669300  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:28:14.695942  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:28:14.721853  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:28:14.746954  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:28:14.773182  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 00:28:14.798782  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:28:14.823720  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:28:14.847907  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:28:14.872531  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:28:14.900554  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:28:14.925365  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:28:14.953903  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:28:14.973565  591333 ssh_runner.go:195] Run: openssl version
	I0917 00:28:14.979257  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:28:14.989070  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:28:14.992786  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:28:14.992847  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:28:14.999827  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:28:15.009762  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:28:15.019180  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:28:15.022635  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:28:15.022690  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:28:15.029591  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:28:15.039107  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:28:15.048628  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:15.052181  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:15.052230  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:15.058893  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
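The <hash>.0 symlinks created above follow OpenSSL's hashed-directory convention: the link name is the certificate's subject-name hash, which is exactly what the preceding openssl x509 -hash -noout calls print. For the minikube CA, for instance,

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem

prints b5213941, matching the /etc/ssl/certs/b5213941.0 link just made.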
	I0917 00:28:15.069771  591333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:28:15.073670  591333 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 00:28:15.073738  591333 kubeadm.go:392] StartCluster: {Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:28:15.073818  591333 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 00:28:15.073904  591333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:28:15.110504  591333 cri.go:89] found id: ""
	I0917 00:28:15.110589  591333 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:28:15.119903  591333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 00:28:15.129328  591333 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 00:28:15.129384  591333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 00:28:15.138492  591333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 00:28:15.138510  591333 kubeadm.go:157] found existing configuration files:
	
	I0917 00:28:15.138563  591333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 00:28:15.147903  591333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 00:28:15.147969  591333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 00:28:15.157062  591333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 00:28:15.166583  591333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 00:28:15.166646  591333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 00:28:15.176378  591333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 00:28:15.185922  591333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 00:28:15.185988  591333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 00:28:15.195234  591333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 00:28:15.204565  591333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 00:28:15.204624  591333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 00:28:15.213513  591333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 00:28:15.268809  591333 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0917 00:28:15.322273  591333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 00:28:25.344526  591333 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0917 00:28:25.344586  591333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 00:28:25.344654  591333 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0917 00:28:25.344699  591333 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0917 00:28:25.344758  591333 kubeadm.go:310] OS: Linux
	I0917 00:28:25.344813  591333 kubeadm.go:310] CGROUPS_CPU: enabled
	I0917 00:28:25.344864  591333 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0917 00:28:25.344910  591333 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0917 00:28:25.344953  591333 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0917 00:28:25.345000  591333 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0917 00:28:25.345048  591333 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0917 00:28:25.345119  591333 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0917 00:28:25.345192  591333 kubeadm.go:310] CGROUPS_IO: enabled
	I0917 00:28:25.345263  591333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 00:28:25.345346  591333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 00:28:25.345452  591333 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 00:28:25.345508  591333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 00:28:25.347069  591333 out.go:252]   - Generating certificates and keys ...
	I0917 00:28:25.347143  591333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 00:28:25.347233  591333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 00:28:25.347311  591333 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 00:28:25.347369  591333 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 00:28:25.347468  591333 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 00:28:25.347518  591333 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 00:28:25.347562  591333 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 00:28:25.347663  591333 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-671025 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 00:28:25.347707  591333 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 00:28:25.347846  591333 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-671025 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 00:28:25.348037  591333 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 00:28:25.348142  591333 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 00:28:25.348209  591333 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 00:28:25.348278  591333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 00:28:25.348323  591333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 00:28:25.348380  591333 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 00:28:25.348445  591333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 00:28:25.348531  591333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 00:28:25.348623  591333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 00:28:25.348735  591333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 00:28:25.348831  591333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 00:28:25.351075  591333 out.go:252]   - Booting up control plane ...
	I0917 00:28:25.351182  591333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 00:28:25.351283  591333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 00:28:25.351361  591333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 00:28:25.351548  591333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 00:28:25.351700  591333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0917 00:28:25.351849  591333 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0917 00:28:25.351934  591333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 00:28:25.351970  591333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 00:28:25.352082  591333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 00:28:25.352189  591333 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 00:28:25.352283  591333 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00103693s
	I0917 00:28:25.352386  591333 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0917 00:28:25.352498  591333 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0917 00:28:25.352576  591333 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0917 00:28:25.352659  591333 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0917 00:28:25.352745  591333 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.008701955s
	I0917 00:28:25.352807  591333 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.208053254s
	I0917 00:28:25.352891  591333 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 3.501882009s
	I0917 00:28:25.352984  591333 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 00:28:25.353099  591333 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 00:28:25.353159  591333 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 00:28:25.353326  591333 kubeadm.go:310] [mark-control-plane] Marking the node ha-671025 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 00:28:25.353381  591333 kubeadm.go:310] [bootstrap-token] Using token: 945t58.lx3tewj0v31y7u2l
	I0917 00:28:25.354623  591333 out.go:252]   - Configuring RBAC rules ...
	I0917 00:28:25.354715  591333 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 00:28:25.354845  591333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 00:28:25.355014  591333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 00:28:25.355187  591333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 00:28:25.355345  591333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 00:28:25.355454  591333 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 00:28:25.355574  591333 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 00:28:25.355621  591333 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 00:28:25.355662  591333 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 00:28:25.355668  591333 kubeadm.go:310] 
	I0917 00:28:25.355718  591333 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 00:28:25.355727  591333 kubeadm.go:310] 
	I0917 00:28:25.355804  591333 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 00:28:25.355810  591333 kubeadm.go:310] 
	I0917 00:28:25.355831  591333 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 00:28:25.355911  591333 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 00:28:25.355972  591333 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 00:28:25.355979  591333 kubeadm.go:310] 
	I0917 00:28:25.356051  591333 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 00:28:25.356065  591333 kubeadm.go:310] 
	I0917 00:28:25.356135  591333 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 00:28:25.356143  591333 kubeadm.go:310] 
	I0917 00:28:25.356220  591333 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 00:28:25.356331  591333 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 00:28:25.356455  591333 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 00:28:25.356470  591333 kubeadm.go:310] 
	I0917 00:28:25.356549  591333 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 00:28:25.356635  591333 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 00:28:25.356643  591333 kubeadm.go:310] 
	I0917 00:28:25.356717  591333 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 945t58.lx3tewj0v31y7u2l \
	I0917 00:28:25.356829  591333 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 \
	I0917 00:28:25.356858  591333 kubeadm.go:310] 	--control-plane 
	I0917 00:28:25.356865  591333 kubeadm.go:310] 
	I0917 00:28:25.356941  591333 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 00:28:25.356947  591333 kubeadm.go:310] 
	I0917 00:28:25.357048  591333 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 945t58.lx3tewj0v31y7u2l \
	I0917 00:28:25.357188  591333 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 
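The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key; it can be recomputed on the control plane to verify a join command out of band (the standard recipe, using this run's cert path):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex

which should emit the 641c59b7... digest shown above.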
	I0917 00:28:25.357207  591333 cni.go:84] Creating CNI manager for ""
	I0917 00:28:25.357216  591333 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0917 00:28:25.358901  591333 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0917 00:28:25.360097  591333 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0917 00:28:25.364931  591333 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0917 00:28:25.364953  591333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0917 00:28:25.387094  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0917 00:28:25.613643  591333 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 00:28:25.613728  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:25.613746  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-671025 minikube.k8s.io/updated_at=2025_09_17T00_28_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-671025 minikube.k8s.io/primary=true
	I0917 00:28:25.624073  591333 ops.go:34] apiserver oom_adj: -16
	I0917 00:28:25.696361  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:26.196672  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:26.696850  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:27.197218  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:27.696539  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:28.196491  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:28.696543  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:29.196814  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:29.696595  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:30.196581  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:30.273337  591333 kubeadm.go:1105] duration metric: took 4.659672583s to wait for elevateKubeSystemPrivileges
	I0917 00:28:30.273483  591333 kubeadm.go:394] duration metric: took 15.19974193s to StartCluster
	I0917 00:28:30.273523  591333 settings.go:142] acquiring lock: {Name:mk3b4e5824fb8718eece00dc70a9d05f0af2a028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:30.273607  591333 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:28:30.274607  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:30.274913  591333 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:28:30.274945  591333 start.go:241] waiting for startup goroutines ...
	I0917 00:28:30.274948  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 00:28:30.274965  591333 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:28:30.275045  591333 addons.go:69] Setting storage-provisioner=true in profile "ha-671025"
	I0917 00:28:30.275085  591333 addons.go:238] Setting addon storage-provisioner=true in "ha-671025"
	I0917 00:28:30.275129  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:28:30.275048  591333 addons.go:69] Setting default-storageclass=true in profile "ha-671025"
	I0917 00:28:30.275164  591333 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-671025"
	I0917 00:28:30.275205  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:28:30.275523  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:30.275665  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:30.298018  591333 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:28:30.298668  591333 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:28:30.298695  591333 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:28:30.298702  591333 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:28:30.298708  591333 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:28:30.298714  591333 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:28:30.298802  591333 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:28:30.299193  591333 addons.go:238] Setting addon default-storageclass=true in "ha-671025"
	I0917 00:28:30.299247  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:28:30.299354  591333 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 00:28:30.299784  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:30.300585  591333 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:28:30.300605  591333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 00:28:30.300669  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:30.319752  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:30.321070  591333 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 00:28:30.321101  591333 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 00:28:30.321165  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:30.347717  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:30.362789  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 00:28:30.443108  591333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:28:30.467358  591333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 00:28:30.541692  591333 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
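
The sed pipeline logged at 00:28:30.362789 rewrites the coredns ConfigMap in place: it splices a hosts plugin block ahead of the forward directive so that host.minikube.internal resolves to 192.168.49.1 (the Docker network gateway) from inside the cluster, and inserts a log directive above errors. The affected part of the Corefile ends up roughly as follows (reconstructed from the sed expressions, not captured from the cluster):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }
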
	I0917 00:28:30.788755  591333 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0917 00:28:30.790283  591333 addons.go:514] duration metric: took 515.302961ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0917 00:28:30.790337  591333 start.go:246] waiting for cluster config update ...
	I0917 00:28:30.790355  591333 start.go:255] writing updated cluster config ...
	I0917 00:28:30.792167  591333 out.go:203] 
	I0917 00:28:30.794434  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:28:30.794553  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:28:30.797029  591333 out.go:179] * Starting "ha-671025-m02" control-plane node in "ha-671025" cluster
	I0917 00:28:30.798740  591333 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:28:30.800340  591333 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:28:30.801532  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:28:30.801576  591333 cache.go:58] Caching tarball of preloaded images
	I0917 00:28:30.801656  591333 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:28:30.801701  591333 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:28:30.801721  591333 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:28:30.801837  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:28:30.826923  591333 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:28:30.826950  591333 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:28:30.826970  591333 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:28:30.827006  591333 start.go:360] acquireMachinesLock for ha-671025-m02: {Name:mk1465985964f60af81adbf10dbe0a21c7eb20d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:28:30.827168  591333 start.go:364] duration metric: took 135.604µs to acquireMachinesLock for "ha-671025-m02"
	I0917 00:28:30.827198  591333 start.go:93] Provisioning new machine with config: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:28:30.827285  591333 start.go:125] createHost starting for "m02" (driver="docker")
	I0917 00:28:30.829869  591333 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:28:30.830019  591333 start.go:159] libmachine.API.Create for "ha-671025" (driver="docker")
	I0917 00:28:30.830056  591333 client.go:168] LocalClient.Create starting
	I0917 00:28:30.830117  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 00:28:30.830162  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:28:30.830180  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:28:30.830241  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 00:28:30.830266  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:28:30.830274  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:28:30.830527  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:28:30.850687  591333 network_create.go:77] Found existing network {name:ha-671025 subnet:0xc0018d10b0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0917 00:28:30.850727  591333 kic.go:121] calculated static IP "192.168.49.3" for the "ha-671025-m02" container
	I0917 00:28:30.850801  591333 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:28:30.869737  591333 cli_runner.go:164] Run: docker volume create ha-671025-m02 --label name.minikube.sigs.k8s.io=ha-671025-m02 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:28:30.890468  591333 oci.go:103] Successfully created a docker volume ha-671025-m02
	I0917 00:28:30.890596  591333 cli_runner.go:164] Run: docker run --rm --name ha-671025-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025-m02 --entrypoint /usr/bin/test -v ha-671025-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:28:31.278702  591333 oci.go:107] Successfully prepared a docker volume ha-671025-m02
	I0917 00:28:31.278750  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:28:31.278777  591333 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:28:31.278882  591333 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:28:35.682273  591333 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.403350864s)
	I0917 00:28:35.682311  591333 kic.go:203] duration metric: took 4.403531688s to extract preloaded images to volume ...
	W0917 00:28:35.682411  591333 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:28:35.682448  591333 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:28:35.682488  591333 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:28:35.742164  591333 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-671025-m02 --name ha-671025-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-671025-m02 --network ha-671025 --ip 192.168.49.3 --volume ha-671025-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 00:28:36.033045  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Running}}
	I0917 00:28:36.053351  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:28:36.072949  591333 cli_runner.go:164] Run: docker exec ha-671025-m02 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:28:36.126815  591333 oci.go:144] the created container "ha-671025-m02" has a running status.
	I0917 00:28:36.126844  591333 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa...
	I0917 00:28:36.161749  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0917 00:28:36.161792  591333 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:28:36.189714  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:28:36.212082  591333 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 00:28:36.212109  591333 kic_runner.go:114] Args: [docker exec --privileged ha-671025-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 00:28:36.260306  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:28:36.282829  591333 machine.go:93] provisionDockerMachine start ...
	I0917 00:28:36.282954  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:36.312073  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:36.312435  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I0917 00:28:36.312461  591333 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:28:36.313226  591333 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47290->127.0.0.1:33153: read: connection reset by peer
	I0917 00:28:39.452508  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m02
	
	I0917 00:28:39.452557  591333 ubuntu.go:182] provisioning hostname "ha-671025-m02"
	I0917 00:28:39.452652  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:39.472236  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:39.472561  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I0917 00:28:39.472581  591333 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m02 && echo "ha-671025-m02" | sudo tee /etc/hostname
	I0917 00:28:39.626427  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m02
	
	I0917 00:28:39.626517  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:39.645919  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:39.646146  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I0917 00:28:39.646163  591333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:28:39.786717  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
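
The inline script above is how minikube makes a node's own hostname resolve locally: if no /etc/hosts line already ends in ha-671025-m02, it rewrites the 127.0.1.1 entry when one exists, otherwise it appends one. The same logic in Go, as a self-contained sketch (hypothetical helper; minikube runs the shell version over SSH):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    // ensureHostname mirrors the shell above: guarantee an /etc/hosts entry
    // mapping 127.0.1.1 to the node name so hostname resolution never fails.
    func ensureHostname(hostsPath, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	content := string(data)
    	// Equivalent of: grep -xq '.*\s<name>' /etc/hosts
    	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(content) {
    		return nil // already resolvable
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopback.MatchString(content) {
    		content = loopback.ReplaceAllString(content, "127.0.1.1 "+name)
    	} else {
    		if !strings.HasSuffix(content, "\n") {
    			content += "\n"
    		}
    		content += "127.0.1.1 " + name + "\n"
    	}
    	return os.WriteFile(hostsPath, []byte(content), 0644)
    }

    func main() {
    	if err := ensureHostname("/etc/hosts", "ha-671025-m02"); err != nil {
    		fmt.Println(err)
    	}
    }
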
	I0917 00:28:39.786756  591333 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:28:39.786781  591333 ubuntu.go:190] setting up certificates
	I0917 00:28:39.786798  591333 provision.go:84] configureAuth start
	I0917 00:28:39.786974  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:28:39.807773  591333 provision.go:143] copyHostCerts
	I0917 00:28:39.807815  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:28:39.807847  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:28:39.807858  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:28:39.807932  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:28:39.808029  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:28:39.808050  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:28:39.808054  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:28:39.808081  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:28:39.808149  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:28:39.808167  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:28:39.808172  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:28:39.808200  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:28:39.808255  591333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m02 san=[127.0.0.1 192.168.49.3 ha-671025-m02 localhost minikube]
	I0917 00:28:39.918454  591333 provision.go:177] copyRemoteCerts
	I0917 00:28:39.918537  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:28:39.918589  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:39.937978  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:28:40.039160  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:28:40.039233  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:28:40.069797  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:28:40.069887  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:28:40.098311  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:28:40.098408  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 00:28:40.127419  591333 provision.go:87] duration metric: took 340.575644ms to configureAuth
	I0917 00:28:40.127458  591333 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:28:40.127656  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:28:40.127785  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:40.147026  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:40.147308  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I0917 00:28:40.147331  591333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:28:40.409609  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:28:40.409640  591333 machine.go:96] duration metric: took 4.1267811s to provisionDockerMachine
	I0917 00:28:40.409651  591333 client.go:171] duration metric: took 9.579589798s to LocalClient.Create
	I0917 00:28:40.409674  591333 start.go:167] duration metric: took 9.579655281s to libmachine.API.Create "ha-671025"
	I0917 00:28:40.409684  591333 start.go:293] postStartSetup for "ha-671025-m02" (driver="docker")
	I0917 00:28:40.409696  591333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:28:40.409769  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:28:40.409816  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:40.431881  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:28:40.535836  591333 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:28:40.540091  591333 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:28:40.540127  591333 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:28:40.540134  591333 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:28:40.540141  591333 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:28:40.540153  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:28:40.540203  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:28:40.540294  591333 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:28:40.540310  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:28:40.540600  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:28:40.551220  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:28:40.582236  591333 start.go:296] duration metric: took 172.533526ms for postStartSetup
	I0917 00:28:40.582728  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:28:40.602550  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:28:40.602895  591333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:28:40.602973  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:40.625331  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:28:40.720887  591333 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:28:40.725796  591333 start.go:128] duration metric: took 9.898487722s to createHost
	I0917 00:28:40.725827  591333 start.go:83] releasing machines lock for "ha-671025-m02", held for 9.89864483s
	I0917 00:28:40.725898  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:28:40.749075  591333 out.go:179] * Found network options:
	I0917 00:28:40.750936  591333 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:28:40.752439  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:28:40.752503  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:28:40.752575  591333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:28:40.752624  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:40.752703  591333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:28:40.752776  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:40.774163  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:28:40.775400  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:28:41.009369  591333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:28:41.014989  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:28:41.040280  591333 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:28:41.040373  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:28:41.077837  591333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 00:28:41.077864  591333 start.go:495] detecting cgroup driver to use...
	I0917 00:28:41.077899  591333 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:28:41.077939  591333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:28:41.098363  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:28:41.112692  591333 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:28:41.112768  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:28:41.128481  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:28:41.145954  591333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:28:41.216259  591333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:28:41.293618  591333 docker.go:234] disabling docker service ...
	I0917 00:28:41.293683  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:28:41.314463  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:28:41.327805  591333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:28:41.402097  591333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:28:41.515728  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:28:41.528751  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:28:41.548638  591333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:28:41.548717  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.563770  591333 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:28:41.563842  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.575236  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.586559  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.599824  591333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:28:41.612614  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.624744  591333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.645749  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.659897  591333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:28:41.670457  591333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:28:41.680684  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:28:41.816654  591333 ssh_runner.go:195] Run: sudo systemctl restart crio
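
Taken together, the sed edits at 00:28:41 configure CRI-O before this restart: the pause image is pinned to registry.k8s.io/pause:3.10.1, the cgroup manager is switched to systemd to match the driver detected on the host, conmon is moved into the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls so containers may bind ports below 1024. The resulting drop-in plausibly looks like this (reconstructed from the sed commands; the TOML section headers are assumed, since the edits match lines anywhere in /etc/crio/crio.conf.d/02-crio.conf):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
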
	I0917 00:28:41.923179  591333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:28:41.923241  591333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:28:41.927246  591333 start.go:563] Will wait 60s for crictl version
	I0917 00:28:41.927309  591333 ssh_runner.go:195] Run: which crictl
	I0917 00:28:41.931155  591333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:28:41.970363  591333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:28:41.970470  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:28:42.009043  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:28:42.057831  591333 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:28:42.059352  591333 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:28:42.061008  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:28:42.081413  591333 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:28:42.086716  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:28:42.100745  591333 mustload.go:65] Loading cluster: ha-671025
	I0917 00:28:42.100976  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:28:42.101278  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:42.124810  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:28:42.125292  591333 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.3
	I0917 00:28:42.125333  591333 certs.go:194] generating shared ca certs ...
	I0917 00:28:42.125361  591333 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:42.125545  591333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:28:42.125614  591333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:28:42.125626  591333 certs.go:256] generating profile certs ...
	I0917 00:28:42.125787  591333 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:28:42.125831  591333 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.d800739c
	I0917 00:28:42.125848  591333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.d800739c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0917 00:28:43.131520  591333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.d800739c ...
	I0917 00:28:43.131559  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.d800739c: {Name:mk97bbbbe985039a36a56311ec983801d49afc24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:43.131793  591333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.d800739c ...
	I0917 00:28:43.131814  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.d800739c: {Name:mk2a126624b47a1fbca817c2bf7b065e9ee5a854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:43.131938  591333 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.d800739c -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt
	I0917 00:28:43.132097  591333 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.d800739c -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key
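
Adding a second control-plane node changes the set of addresses the API server certificate must cover, so minikube regenerates the profile's apiserver cert with SANs for the in-cluster service IP, loopback, both node IPs, and the kube-vip address (the IP list logged at 00:28:42.125848 above). A condensed Go sketch of issuing a certificate with those IP SANs, self-signed here for brevity (minikube actually signs with the profile CA):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The IP SANs from the log line above.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
    			net.ParseIP("192.168.49.3"), net.ParseIP("192.168.49.254"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
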
	I0917 00:28:43.132233  591333 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:28:43.132252  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:28:43.132265  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:28:43.132275  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:28:43.132286  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:28:43.132296  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:28:43.132308  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:28:43.132318  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:28:43.132330  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:28:43.132385  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:28:43.132425  591333 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:28:43.132435  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:28:43.132458  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:28:43.132480  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:28:43.132500  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:28:43.132536  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:28:43.132561  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:28:43.132576  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:43.132588  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:28:43.132646  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:43.152207  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:43.242834  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:28:43.247724  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:28:43.261684  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:28:43.265651  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 00:28:43.279426  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:28:43.283200  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:28:43.298316  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:28:43.302656  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 00:28:43.316567  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:28:43.320915  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:28:43.334735  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:28:43.339251  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:28:43.354686  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:28:43.382622  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:28:43.411140  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:28:43.439208  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:28:43.468797  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0917 00:28:43.497239  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:28:43.525628  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:28:43.552854  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:28:43.579567  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:28:43.613480  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:28:43.640927  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:28:43.668098  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:28:43.688016  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 00:28:43.709638  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:28:43.729987  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 00:28:43.751570  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:28:43.772873  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:28:43.793231  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:28:43.813996  591333 ssh_runner.go:195] Run: openssl version
	I0917 00:28:43.820372  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:28:43.831827  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:28:43.836450  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:28:43.836601  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:28:43.845799  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:28:43.858335  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:28:43.870361  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:28:43.874499  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:28:43.874557  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:28:43.882167  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:28:43.894006  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:28:43.906727  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:43.910868  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:43.910926  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:43.918600  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:28:43.930014  591333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:28:43.933717  591333 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 00:28:43.933786  591333 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 crio true true} ...
	I0917 00:28:43.933892  591333 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:28:43.933920  591333 kube-vip.go:115] generating kube-vip config ...
	I0917 00:28:43.933956  591333 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:28:43.949251  591333 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:28:43.949348  591333 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
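
This manifest runs kube-vip as a static pod on each control-plane node. Since the ip_vs modules are unavailable (logged just above), there is no IPVS load balancing; the pods instead compete for the plndr-cp-lock Lease in kube-system, and only the current leader answers ARP for the VIP 192.168.49.254, which is what lets the kubeconfig point at https://192.168.49.254:8443 regardless of which node holds the address. The lease timings in the env block (5s duration, 3s renew deadline, 1s retry) map directly onto client-go's leader election; a minimal sketch of that mechanism, assuming client-go and in-cluster credentials:

    package main

    import (
    	"context"
    	"log"
    	"os"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    	"k8s.io/client-go/tools/leaderelection"
    	"k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		log.Fatal(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	host, _ := os.Hostname()

    	lock := &resourcelock.LeaseLock{
    		LeaseMeta:  metav1.ObjectMeta{Name: "plndr-cp-lock", Namespace: "kube-system"},
    		Client:     client.CoordinationV1(),
    		LockConfig: resourcelock.ResourceLockConfig{Identity: host},
    	}
    	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
    		Lock:          lock,
    		LeaseDuration: 5 * time.Second, // vip_leaseduration
    		RenewDeadline: 3 * time.Second, // vip_renewdeadline
    		RetryPeriod:   1 * time.Second, // vip_retryperiod
    		Callbacks: leaderelection.LeaderCallbacks{
    			OnStartedLeading: func(ctx context.Context) {
    				log.Printf("%s leads: advertise 192.168.49.254 via gratuitous ARP", host)
    			},
    			OnStoppedLeading: func() { log.Printf("%s lost the lease", host) },
    		},
    	})
    }
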
	I0917 00:28:43.949436  591333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:28:43.959785  591333 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:28:43.959858  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:28:43.970815  591333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 00:28:43.992525  591333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:28:44.016479  591333 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:28:44.038080  591333 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:28:44.042531  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:28:44.055802  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:28:44.123804  591333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:28:44.146604  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:28:44.146887  591333 start.go:317] joinCluster: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:28:44.146991  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0917 00:28:44.147052  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:44.166636  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:44.318607  591333 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:28:44.318654  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9ffj9m.gils691l0zbv1gz9 --discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-671025-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0917 00:29:01.319807  591333 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9ffj9m.gils691l0zbv1gz9 --discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-671025-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (17.001126344s)
	I0917 00:29:01.319840  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0917 00:29:01.532514  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-671025-m02 minikube.k8s.io/updated_at=2025_09_17T00_29_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-671025 minikube.k8s.io/primary=false
	I0917 00:29:01.623743  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-671025-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0917 00:29:01.704118  591333 start.go:319] duration metric: took 17.557224287s to joinCluster
	I0917 00:29:01.704207  591333 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:29:01.704539  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:29:01.705687  591333 out.go:179] * Verifying Kubernetes components...
	I0917 00:29:01.707014  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:29:01.810630  591333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:29:01.824161  591333 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:29:01.824231  591333 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
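
While the second control plane is joining, the profile kubeconfig still points at the VIP, so minikube swaps in the primary node's direct address before verifying; otherwise a VIP mid-failover could fail the health checks. A minimal client-go sketch of the same override (the kubeconfig path is illustrative):

// clientoverride.go: sketch of loading a kubeconfig and overriding a
// stale server address before building a clientset, as the
// "Overriding stale ClientConfig host" line above describes.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical path; minikube derives this from the profile directory.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	// Point at a specific control-plane node instead of the HA VIP.
	cfg.Host = "https://192.168.49.2:8443"
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", clientset != nil)
}
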
	I0917 00:29:01.824550  591333 node_ready.go:35] waiting up to 6m0s for node "ha-671025-m02" to be "Ready" ...
	W0917 00:29:03.828446  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	W0917 00:29:05.829871  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	W0917 00:29:08.329045  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	W0917 00:29:10.828964  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	W0917 00:29:13.328972  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	W0917 00:29:15.828569  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	I0917 00:29:16.328859  591333 node_ready.go:49] node "ha-671025-m02" is "Ready"
	I0917 00:29:16.328891  591333 node_ready.go:38] duration metric: took 14.504319776s for node "ha-671025-m02" to be "Ready" ...
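
The retry loop above is the usual Ready-condition poll: fetch the Node object and check that its NodeReady condition reports True, retrying until a deadline. A sketch of that loop with client-go and apimachinery's wait helpers (kubeconfig path illustrative):

// nodeready.go: sketch of the "waiting for node Ready" loop logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the Node until its NodeReady condition is True.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API hiccups as "not ready yet"
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-671025-m02"); err != nil {
		panic(err)
	}
	fmt.Println(`node "ha-671025-m02" is Ready`)
}
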
	I0917 00:29:16.328908  591333 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:29:16.328959  591333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:29:16.341005  591333 api_server.go:72] duration metric: took 14.636761134s to wait for apiserver process to appear ...
	I0917 00:29:16.341029  591333 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:29:16.341048  591333 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:29:16.345248  591333 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:29:16.346148  591333 api_server.go:141] control plane version: v1.34.0
	I0917 00:29:16.346174  591333 api_server.go:131] duration metric: took 5.137742ms to wait for apiserver health ...
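
The healthz gate is just an HTTPS GET against /healthz that must return 200 with body "ok". A bare-bones probe, with TLS verification skipped purely for illustration (minikube trusts the cluster CA instead):

// healthz.go: sketch of the apiserver /healthz probe logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only; a real probe should trust the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
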
	I0917 00:29:16.346183  591333 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:29:16.351147  591333 system_pods.go:59] 17 kube-system pods found
	I0917 00:29:16.351175  591333 system_pods.go:61] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running
	I0917 00:29:16.351180  591333 system_pods.go:61] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running
	I0917 00:29:16.351184  591333 system_pods.go:61] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:29:16.351187  591333 system_pods.go:61] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:29:16.351190  591333 system_pods.go:61] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:29:16.351194  591333 system_pods.go:61] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:29:16.351198  591333 system_pods.go:61] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:29:16.351203  591333 system_pods.go:61] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:29:16.351206  591333 system_pods.go:61] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:29:16.351210  591333 system_pods.go:61] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:29:16.351213  591333 system_pods.go:61] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:29:16.351216  591333 system_pods.go:61] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:29:16.351219  591333 system_pods.go:61] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:29:16.351222  591333 system_pods.go:61] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:29:16.351225  591333 system_pods.go:61] "kube-vip-ha-671025" [d18d568e-7183-4cb4-898f-c730aa8b9811] Running
	I0917 00:29:16.351227  591333 system_pods.go:61] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:29:16.351230  591333 system_pods.go:61] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:29:16.351235  591333 system_pods.go:74] duration metric: took 5.047428ms to wait for pod list to return data ...
	I0917 00:29:16.351245  591333 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:29:16.354087  591333 default_sa.go:45] found service account: "default"
	I0917 00:29:16.354107  591333 default_sa.go:55] duration metric: took 2.857135ms for default service account to be created ...
	I0917 00:29:16.354115  591333 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:29:16.357519  591333 system_pods.go:86] 17 kube-system pods found
	I0917 00:29:16.357544  591333 system_pods.go:89] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running
	I0917 00:29:16.357550  591333 system_pods.go:89] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running
	I0917 00:29:16.357555  591333 system_pods.go:89] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:29:16.357560  591333 system_pods.go:89] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:29:16.357565  591333 system_pods.go:89] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:29:16.357570  591333 system_pods.go:89] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:29:16.357576  591333 system_pods.go:89] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:29:16.357582  591333 system_pods.go:89] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:29:16.357591  591333 system_pods.go:89] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:29:16.357599  591333 system_pods.go:89] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:29:16.357605  591333 system_pods.go:89] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:29:16.357611  591333 system_pods.go:89] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:29:16.357614  591333 system_pods.go:89] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:29:16.357619  591333 system_pods.go:89] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:29:16.357623  591333 system_pods.go:89] "kube-vip-ha-671025" [d18d568e-7183-4cb4-898f-c730aa8b9811] Running
	I0917 00:29:16.357630  591333 system_pods.go:89] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:29:16.357633  591333 system_pods.go:89] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:29:16.357642  591333 system_pods.go:126] duration metric: took 3.522377ms to wait for k8s-apps to be running ...
	I0917 00:29:16.357652  591333 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:29:16.357710  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:29:16.370259  591333 system_svc.go:56] duration metric: took 12.594604ms WaitForService to wait for kubelet
	I0917 00:29:16.370292  591333 kubeadm.go:578] duration metric: took 14.666051199s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:29:16.370351  591333 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:29:16.373484  591333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:29:16.373509  591333 node_conditions.go:123] node cpu capacity is 8
	I0917 00:29:16.373526  591333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:29:16.373531  591333 node_conditions.go:123] node cpu capacity is 8
	I0917 00:29:16.373545  591333 node_conditions.go:105] duration metric: took 3.187263ms to run NodePressure ...
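
The NodePressure figures come straight from each Node's status.capacity; a sketch that lists the same cpu and ephemeral-storage numbers for every node (kubeconfig path illustrative):

// nodecapacity.go: sketch of reading the per-node capacity figures that
// the NodePressure check above prints.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
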
	I0917 00:29:16.373563  591333 start.go:241] waiting for startup goroutines ...
	I0917 00:29:16.373599  591333 start.go:255] writing updated cluster config ...
	I0917 00:29:16.375540  591333 out.go:203] 
	I0917 00:29:16.376982  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:29:16.377123  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:29:16.378689  591333 out.go:179] * Starting "ha-671025-m03" control-plane node in "ha-671025" cluster
	I0917 00:29:16.380127  591333 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:29:16.381271  591333 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:29:16.382178  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:29:16.382203  591333 cache.go:58] Caching tarball of preloaded images
	I0917 00:29:16.382278  591333 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:29:16.382305  591333 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:29:16.382314  591333 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:29:16.382434  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:29:16.405280  591333 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:29:16.405301  591333 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:29:16.405319  591333 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:29:16.405349  591333 start.go:360] acquireMachinesLock for ha-671025-m03: {Name:mk60ae20c28e89b2af34eaf4825fcf2e756b9f82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:29:16.405476  591333 start.go:364] duration metric: took 109.564µs to acquireMachinesLock for "ha-671025-m03"
	I0917 00:29:16.405502  591333 start.go:93] Provisioning new machine with config: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:29:16.405601  591333 start.go:125] createHost starting for "m03" (driver="docker")
	I0917 00:29:16.408212  591333 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:29:16.408326  591333 start.go:159] libmachine.API.Create for "ha-671025" (driver="docker")
	I0917 00:29:16.408364  591333 client.go:168] LocalClient.Create starting
	I0917 00:29:16.408459  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 00:29:16.408501  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:29:16.408515  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:29:16.408569  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 00:29:16.408588  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:29:16.408596  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:29:16.408797  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:29:16.428129  591333 network_create.go:77] Found existing network {name:ha-671025 subnet:0xc001a2abd0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0917 00:29:16.428169  591333 kic.go:121] calculated static IP "192.168.49.4" for the "ha-671025-m03" container
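
kic assigns nodes deterministic addresses inside the cluster subnet, which is why the third node lands on 192.168.49.4. A toy sketch of that kind of offset-into-subnet arithmetic; the node-index-plus-one rule here is an assumption, and the carry across octets is deliberately ignored:

// staticip.go: toy sketch of deriving a node's static IP in a /24 subnet.
package main

import (
	"fmt"
	"net"
)

// nthHost returns base-address + n inside the given subnet. No carry
// handling; this is only for small /24 examples.
func nthHost(cidr string, n int) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	return net.IPv4(ip[0], ip[1], ip[2], ip[3]+byte(n)), nil
}

func main() {
	// Assumed rule: node k gets .(k+1), so the third node (m03) gets .4.
	ip, err := nthHost("192.168.49.0/24", 4)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 192.168.49.4
}
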
	I0917 00:29:16.428233  591333 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:29:16.447362  591333 cli_runner.go:164] Run: docker volume create ha-671025-m03 --label name.minikube.sigs.k8s.io=ha-671025-m03 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:29:16.467514  591333 oci.go:103] Successfully created a docker volume ha-671025-m03
	I0917 00:29:16.467629  591333 cli_runner.go:164] Run: docker run --rm --name ha-671025-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025-m03 --entrypoint /usr/bin/test -v ha-671025-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:29:16.870641  591333 oci.go:107] Successfully prepared a docker volume ha-671025-m03
	I0917 00:29:16.870686  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:29:16.870713  591333 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:29:16.870789  591333 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:29:21.201351  591333 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.33049988s)
	I0917 00:29:21.201386  591333 kic.go:203] duration metric: took 4.330670212s to extract preloaded images to volume ...
	W0917 00:29:21.201499  591333 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:29:21.201529  591333 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:29:21.201570  591333 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:29:21.257059  591333 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-671025-m03 --name ha-671025-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-671025-m03 --network ha-671025 --ip 192.168.49.4 --volume ha-671025-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 00:29:21.526231  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Running}}
	I0917 00:29:21.546352  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:29:21.567256  591333 cli_runner.go:164] Run: docker exec ha-671025-m03 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:29:21.619083  591333 oci.go:144] the created container "ha-671025-m03" has a running status.
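
Each of the cli_runner lines above shells out to docker and reads a single field back through a Go template. A self-contained sketch of that pattern:

// dockerinspect.go: sketch of the cli_runner pattern above: run
// `docker container inspect` and extract one field via --format.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerRunning(name string) (bool, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", "{{.State.Running}}", name).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "true", nil
}

func main() {
	ok, err := containerRunning("ha-671025-m03")
	if err != nil {
		panic(err)
	}
	fmt.Println("running:", ok)
}
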
	I0917 00:29:21.619117  591333 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa...
	I0917 00:29:21.831158  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0917 00:29:21.831204  591333 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:29:21.864081  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:29:21.886560  591333 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 00:29:21.886587  591333 kic_runner.go:114] Args: [docker exec --privileged ha-671025-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 00:29:21.939241  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:29:21.960815  591333 machine.go:93] provisionDockerMachine start ...
	I0917 00:29:21.961005  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:21.982259  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:29:21.982549  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0917 00:29:21.982571  591333 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:29:22.123516  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m03
	
	I0917 00:29:22.123558  591333 ubuntu.go:182] provisioning hostname "ha-671025-m03"
	I0917 00:29:22.123633  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:22.143852  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:29:22.144070  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0917 00:29:22.144083  591333 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m03 && echo "ha-671025-m03" | sudo tee /etc/hostname
	I0917 00:29:22.298146  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m03
	
	I0917 00:29:22.298229  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:22.317607  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:29:22.317851  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0917 00:29:22.317875  591333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:29:22.455839  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
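
Provisioning runs entirely over the "native" SSH client against the host port Docker mapped to the container's 22/tcp (33158 here). A sketch of one such remote command with golang.org/x/crypto/ssh; the key path is illustrative, and host-key checking is disabled only for the sketch:

// sshrun.go: sketch of dialing the container's forwarded SSH port and
// running one command, as the provisioning steps above do.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machines/ha-671025-m03/id_rsa") // illustrative path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
	}
	// 33158 is the host port Docker mapped to the container's 22/tcp.
	client, err := ssh.Dial("tcp", "127.0.0.1:33158", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}
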
	I0917 00:29:22.455874  591333 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:29:22.455894  591333 ubuntu.go:190] setting up certificates
	I0917 00:29:22.455908  591333 provision.go:84] configureAuth start
	I0917 00:29:22.455983  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:29:22.474745  591333 provision.go:143] copyHostCerts
	I0917 00:29:22.474791  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:29:22.474821  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:29:22.474830  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:29:22.474900  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:29:22.474988  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:29:22.475015  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:29:22.475028  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:29:22.475061  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:29:22.475116  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:29:22.475134  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:29:22.475141  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:29:22.475164  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:29:22.475216  591333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m03 san=[127.0.0.1 192.168.49.4 ha-671025-m03 localhost minikube]
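
The server certificate has to carry every name and address a client might dial, which is why the san list above includes the loopback address, the new node IP, the node hostname, and the generic names. A crypto/x509 sketch that issues such a SAN-bearing cert; the CA here is generated in-memory for self-containment, whereas minikube reuses ca.pem/ca-key.pem:

// servercert.go: sketch of signing a server cert with IP and DNS SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// In-memory CA for the sketch only.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server cert whose SANs match the logged san=[...] list.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-671025-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
		DNSNames:     []string{"ha-671025-m03", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
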
	I0917 00:29:22.562518  591333 provision.go:177] copyRemoteCerts
	I0917 00:29:22.562597  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:29:22.562645  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:22.582491  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:29:22.681516  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:29:22.681585  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:29:22.711977  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:29:22.712070  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:29:22.739378  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:29:22.739454  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:29:22.767225  591333 provision.go:87] duration metric: took 311.299307ms to configureAuth
	I0917 00:29:22.767254  591333 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:29:22.767513  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:29:22.767641  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:22.787106  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:29:22.787322  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0917 00:29:22.787337  591333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:29:23.027585  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:29:23.027618  591333 machine.go:96] duration metric: took 1.066782115s to provisionDockerMachine
	I0917 00:29:23.027629  591333 client.go:171] duration metric: took 6.619257203s to LocalClient.Create
	I0917 00:29:23.027644  591333 start.go:167] duration metric: took 6.619319411s to libmachine.API.Create "ha-671025"
	I0917 00:29:23.027653  591333 start.go:293] postStartSetup for "ha-671025-m03" (driver="docker")
	I0917 00:29:23.027665  591333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:29:23.027739  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:29:23.027789  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:23.048535  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:29:23.148623  591333 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:29:23.152295  591333 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:29:23.152333  591333 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:29:23.152344  591333 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:29:23.152354  591333 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:29:23.152402  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:29:23.152478  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:29:23.152577  591333 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:29:23.152589  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:29:23.152698  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:29:23.162366  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:29:23.192510  591333 start.go:296] duration metric: took 164.839418ms for postStartSetup
	I0917 00:29:23.192875  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:29:23.211261  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:29:23.211545  591333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:29:23.211589  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:23.228367  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:29:23.323873  591333 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:29:23.328453  591333 start.go:128] duration metric: took 6.922836798s to createHost
	I0917 00:29:23.328480  591333 start.go:83] releasing machines lock for "ha-671025-m03", held for 6.9229927s
	I0917 00:29:23.328559  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:29:23.348699  591333 out.go:179] * Found network options:
	I0917 00:29:23.350091  591333 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0917 00:29:23.351262  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:29:23.351286  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:29:23.351307  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:29:23.351319  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:29:23.351413  591333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:29:23.351457  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:23.351483  591333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:29:23.351555  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:23.370656  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:29:23.370963  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:29:23.603202  591333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:29:23.608556  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:29:23.632987  591333 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:29:23.633078  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:29:23.665413  591333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 00:29:23.665445  591333 start.go:495] detecting cgroup driver to use...
	I0917 00:29:23.665479  591333 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:29:23.665582  591333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:29:23.682513  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:29:23.695198  591333 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:29:23.695265  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:29:23.710235  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:29:23.725450  591333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:29:23.796030  591333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:29:23.870255  591333 docker.go:234] disabling docker service ...
	I0917 00:29:23.870317  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:29:23.889003  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:29:23.901613  591333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:29:23.973987  591333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:29:24.138099  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:29:24.150712  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:29:24.168641  591333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:29:24.168702  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.181874  591333 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:29:24.181936  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.193571  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.204646  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.215806  591333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:29:24.225866  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.236708  591333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.254758  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.266984  591333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:29:24.276695  591333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:29:24.286587  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:29:24.356850  591333 ssh_runner.go:195] Run: sudo systemctl restart crio
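
All of the CRI-O tuning above works by rewriting whole "key = value" lines in /etc/crio/crio.conf.d/02-crio.conf with sed, then daemon-reload and restart. A Go sketch of the same wholesale line replacement; the two keys shown are the ones from the log, and the regex mirrors the `sed -i 's|^.*key = .*$|...|'` form:

// crioconf.go: sketch of the in-place config rewrites logged above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey replaces the whole "key = value" line, commented or not, the
// way the logged sed invocations do.
func setKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
		panic(err)
	}
	if err := setKey(conf, "cgroup_manager", "systemd"); err != nil {
		panic(err)
	}
	// A `systemctl restart crio` is still needed to pick up the change.
}
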
	I0917 00:29:24.461065  591333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:29:24.461156  591333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:29:24.465833  591333 start.go:563] Will wait 60s for crictl version
	I0917 00:29:24.465903  591333 ssh_runner.go:195] Run: which crictl
	I0917 00:29:24.469817  591333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:29:24.506319  591333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:29:24.506419  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:29:24.544050  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:29:24.583372  591333 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:29:24.584727  591333 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:29:24.586235  591333 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0917 00:29:24.587662  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:29:24.605611  591333 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:29:24.610151  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:29:24.622865  591333 mustload.go:65] Loading cluster: ha-671025
	I0917 00:29:24.623090  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:29:24.623289  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:29:24.641474  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:29:24.641732  591333 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.4
	I0917 00:29:24.641743  591333 certs.go:194] generating shared ca certs ...
	I0917 00:29:24.641758  591333 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:29:24.641894  591333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:29:24.641944  591333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:29:24.641954  591333 certs.go:256] generating profile certs ...
	I0917 00:29:24.642025  591333 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:29:24.642065  591333 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.bb6f0fe7
	I0917 00:29:24.642081  591333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.bb6f0fe7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0917 00:29:24.856212  591333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.bb6f0fe7 ...
	I0917 00:29:24.856249  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.bb6f0fe7: {Name:mk65d29cf7ba29b99ab2056d134a2884f928fccb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:29:24.856490  591333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.bb6f0fe7 ...
	I0917 00:29:24.856512  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.bb6f0fe7: {Name:mkd89da6d4d9fb3421e5c7677b39452bd32f11a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:29:24.856628  591333 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.bb6f0fe7 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt
	I0917 00:29:24.856803  591333 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.bb6f0fe7 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key
	I0917 00:29:24.856940  591333 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:29:24.856957  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:29:24.856970  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:29:24.856984  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:29:24.857022  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:29:24.857038  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:29:24.857051  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:29:24.857063  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:29:24.857073  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:29:24.857137  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:29:24.857169  591333 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:29:24.857179  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:29:24.857203  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:29:24.857236  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:29:24.857259  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:29:24.857298  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:29:24.857323  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:29:24.857336  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:29:24.857410  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:29:24.857487  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:29:24.876681  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:29:24.965759  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:29:24.970077  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:29:24.983505  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:29:24.987459  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 00:29:25.001249  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:29:25.005139  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:29:25.019000  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:29:25.023277  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 00:29:25.037665  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:29:25.041486  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:29:25.056004  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:29:25.060379  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:29:25.075527  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:29:25.103048  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:29:25.130436  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:29:25.156335  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:29:25.183962  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0917 00:29:25.210290  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:29:25.237850  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:29:25.264713  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:29:25.292266  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:29:25.322436  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:29:25.349159  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:29:25.376714  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:29:25.397066  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 00:29:25.416141  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:29:25.436031  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 00:29:25.455195  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:29:25.475694  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:29:25.494981  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:29:25.514182  591333 ssh_runner.go:195] Run: openssl version
	I0917 00:29:25.519757  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:29:25.530366  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:29:25.534300  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:29:25.534372  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:29:25.541463  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:29:25.551798  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:29:25.562696  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:29:25.566820  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:29:25.566898  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:29:25.575288  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:29:25.585578  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:29:25.596219  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:29:25.599949  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:29:25.600000  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:29:25.608220  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
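The openssl x509 -hash / ln -fs pairs above install each uploaded CA under the name OpenSSL actually resolves at verification time: a symlink in /etc/ssl/certs named <subject-hash>.0. A minimal Go sketch of that compute-hash-then-link step (a hypothetical standalone helper, not minikube's code; assumes it runs as root on the node with openssl on PATH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash replicates `ln -fs <pem> /etc/ssl/certs/<hash>.0`, where
// <hash> comes from `openssl x509 -hash -noout -in <pem>`.
func linkBySubjectHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // the -f in ln -fs: drop a stale link if present
	return os.Symlink(pem, link)
}

func main() {
	// The three CAs distributed in the log above.
	for _, pem := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/521273.pem",
		"/usr/share/ca-certificates/5212732.pem",
	} {
		if err := linkBySubjectHash(pem); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}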
	I0917 00:29:25.620163  591333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:29:25.623987  591333 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 00:29:25.624048  591333 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 crio true true} ...
	I0917 00:29:25.624137  591333 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:29:25.624164  591333 kube-vip.go:115] generating kube-vip config ...
	I0917 00:29:25.624201  591333 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:29:25.637994  591333 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:29:25.638073  591333 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
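The vip_leaseduration / vip_renewdeadline / vip_retryperiod values in the generated manifest (5/3/1 seconds) follow the ordering Kubernetes leader election requires: lease duration > renew deadline > retry period. A tiny illustrative check of that invariant (values copied from the manifest above; the check itself is mine, not part of minikube):

package main

import (
	"fmt"
	"time"
)

func main() {
	leaseDuration := 5 * time.Second // vip_leaseduration
	renewDeadline := 3 * time.Second // vip_renewdeadline
	retryPeriod := 1 * time.Second   // vip_retryperiod

	if leaseDuration > renewDeadline && renewDeadline > retryPeriod {
		fmt.Println("leader-election timings are consistent")
	} else {
		fmt.Println("invalid: need leaseDuration > renewDeadline > retryPeriod")
	}
}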
	I0917 00:29:25.638135  591333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:29:25.647722  591333 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:29:25.647792  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:29:25.658193  591333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 00:29:25.679949  591333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:29:25.703178  591333 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:29:25.726279  591333 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:29:25.730482  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:29:25.743251  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:29:25.813167  591333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:29:25.837618  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:29:25.837905  591333 start.go:317] joinCluster: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:29:25.838070  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0917 00:29:25.838130  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:29:25.859495  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:29:26.008672  591333 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:29:26.008736  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p1m8ud.vg6wowozjxeubnbu --discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-671025-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0917 00:29:38.691373  591333 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p1m8ud.vg6wowozjxeubnbu --discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-671025-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (12.682606276s)
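The --discovery-token-ca-cert-hash passed to kubeadm join above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A short Go sketch that recomputes it from the CA staged earlier in this log (a standalone example, not minikube's code):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the raw DER SubjectPublicKeyInfo of the CA certificate.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}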
	I0917 00:29:38.691443  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0917 00:29:38.941535  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-671025-m03 minikube.k8s.io/updated_at=2025_09_17T00_29_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-671025 minikube.k8s.io/primary=false
	I0917 00:29:39.021358  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-671025-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0917 00:29:39.107652  591333 start.go:319] duration metric: took 13.269740721s to joinCluster
	I0917 00:29:39.107734  591333 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:29:39.108038  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:29:39.109032  591333 out.go:179] * Verifying Kubernetes components...
	I0917 00:29:39.110170  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:29:39.212840  591333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:29:39.228175  591333 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:29:39.228249  591333 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:29:39.228513  591333 node_ready.go:35] waiting up to 6m0s for node "ha-671025-m03" to be "Ready" ...
	W0917 00:29:41.232779  591333 node_ready.go:57] node "ha-671025-m03" has "Ready":"False" status (will retry)
	W0917 00:29:43.732906  591333 node_ready.go:57] node "ha-671025-m03" has "Ready":"False" status (will retry)
	W0917 00:29:46.232976  591333 node_ready.go:57] node "ha-671025-m03" has "Ready":"False" status (will retry)
	W0917 00:29:48.732961  591333 node_ready.go:57] node "ha-671025-m03" has "Ready":"False" status (will retry)
	W0917 00:29:51.232362  591333 node_ready.go:57] node "ha-671025-m03" has "Ready":"False" status (will retry)
	I0917 00:29:51.732347  591333 node_ready.go:49] node "ha-671025-m03" is "Ready"
	I0917 00:29:51.732379  591333 node_ready.go:38] duration metric: took 12.503848437s for node "ha-671025-m03" to be "Ready" ...
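The node_ready.go loop above polls the node object roughly every 2.5 seconds until its Ready condition turns True. A minimal client-go equivalent, using the on-node kubeconfig path and node name from this run (a sketch, not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same budget as the wait above
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-671025-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2500 * time.Millisecond) // matches the ~2.5s retry cadence in the log
	}
	fmt.Println("timed out waiting for Ready")
}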
	I0917 00:29:51.732413  591333 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:29:51.732477  591333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:29:51.745126  591333 api_server.go:72] duration metric: took 12.637355364s to wait for apiserver process to appear ...
	I0917 00:29:51.745157  591333 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:29:51.745182  591333 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:29:51.751075  591333 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:29:51.752025  591333 api_server.go:141] control plane version: v1.34.0
	I0917 00:29:51.752049  591333 api_server.go:131] duration metric: took 6.885054ms to wait for apiserver health ...
	I0917 00:29:51.752060  591333 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:29:51.758905  591333 system_pods.go:59] 24 kube-system pods found
	I0917 00:29:51.758940  591333 system_pods.go:61] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running
	I0917 00:29:51.758949  591333 system_pods.go:61] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running
	I0917 00:29:51.758957  591333 system_pods.go:61] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:29:51.758963  591333 system_pods.go:61] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:29:51.758968  591333 system_pods.go:61] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running
	I0917 00:29:51.758973  591333 system_pods.go:61] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:29:51.758978  591333 system_pods.go:61] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:29:51.758990  591333 system_pods.go:61] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:29:51.758995  591333 system_pods.go:61] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:29:51.759000  591333 system_pods.go:61] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:29:51.759004  591333 system_pods.go:61] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running
	I0917 00:29:51.759009  591333 system_pods.go:61] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:29:51.759018  591333 system_pods.go:61] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:29:51.759023  591333 system_pods.go:61] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running
	I0917 00:29:51.759027  591333 system_pods.go:61] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:29:51.759035  591333 system_pods.go:61] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:29:51.759039  591333 system_pods.go:61] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:29:51.759049  591333 system_pods.go:61] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:29:51.759054  591333 system_pods.go:61] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:29:51.759058  591333 system_pods.go:61] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running
	I0917 00:29:51.759066  591333 system_pods.go:61] "kube-vip-ha-671025" [d18d568e-7183-4cb4-898f-c730aa8b9811] Running
	I0917 00:29:51.759070  591333 system_pods.go:61] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:29:51.759075  591333 system_pods.go:61] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:29:51.759079  591333 system_pods.go:61] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:29:51.759086  591333 system_pods.go:74] duration metric: took 7.019861ms to wait for pod list to return data ...
	I0917 00:29:51.759106  591333 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:29:51.761820  591333 default_sa.go:45] found service account: "default"
	I0917 00:29:51.761841  591333 default_sa.go:55] duration metric: took 2.726063ms for default service account to be created ...
	I0917 00:29:51.761850  591333 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:29:51.766999  591333 system_pods.go:86] 24 kube-system pods found
	I0917 00:29:51.767031  591333 system_pods.go:89] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running
	I0917 00:29:51.767037  591333 system_pods.go:89] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running
	I0917 00:29:51.767041  591333 system_pods.go:89] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:29:51.767044  591333 system_pods.go:89] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:29:51.767047  591333 system_pods.go:89] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running
	I0917 00:29:51.767050  591333 system_pods.go:89] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:29:51.767053  591333 system_pods.go:89] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:29:51.767057  591333 system_pods.go:89] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:29:51.767060  591333 system_pods.go:89] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:29:51.767062  591333 system_pods.go:89] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:29:51.767066  591333 system_pods.go:89] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running
	I0917 00:29:51.767069  591333 system_pods.go:89] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:29:51.767072  591333 system_pods.go:89] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:29:51.767075  591333 system_pods.go:89] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running
	I0917 00:29:51.767078  591333 system_pods.go:89] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:29:51.767081  591333 system_pods.go:89] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:29:51.767084  591333 system_pods.go:89] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:29:51.767087  591333 system_pods.go:89] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:29:51.767089  591333 system_pods.go:89] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:29:51.767093  591333 system_pods.go:89] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running
	I0917 00:29:51.767095  591333 system_pods.go:89] "kube-vip-ha-671025" [d18d568e-7183-4cb4-898f-c730aa8b9811] Running
	I0917 00:29:51.767099  591333 system_pods.go:89] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:29:51.767105  591333 system_pods.go:89] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:29:51.767108  591333 system_pods.go:89] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:29:51.767115  591333 system_pods.go:126] duration metric: took 5.259145ms to wait for k8s-apps to be running ...
	I0917 00:29:51.767125  591333 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:29:51.767173  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:29:51.780761  591333 system_svc.go:56] duration metric: took 13.623242ms WaitForService to wait for kubelet
	I0917 00:29:51.780795  591333 kubeadm.go:578] duration metric: took 12.673026165s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:29:51.780819  591333 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:29:51.783987  591333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:29:51.784014  591333 node_conditions.go:123] node cpu capacity is 8
	I0917 00:29:51.784059  591333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:29:51.784065  591333 node_conditions.go:123] node cpu capacity is 8
	I0917 00:29:51.784075  591333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:29:51.784081  591333 node_conditions.go:123] node cpu capacity is 8
	I0917 00:29:51.784090  591333 node_conditions.go:105] duration metric: took 3.264516ms to run NodePressure ...
	I0917 00:29:51.784106  591333 start.go:241] waiting for startup goroutines ...
	I0917 00:29:51.784138  591333 start.go:255] writing updated cluster config ...
	I0917 00:29:51.784529  591333 ssh_runner.go:195] Run: rm -f paused
	I0917 00:29:51.788748  591333 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:29:51.789284  591333 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:29:51.792784  591333 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mqh24" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.797966  591333 pod_ready.go:94] pod "coredns-66bc5c9577-mqh24" is "Ready"
	I0917 00:29:51.797991  591333 pod_ready.go:86] duration metric: took 5.183268ms for pod "coredns-66bc5c9577-mqh24" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.798004  591333 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vfj56" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.802611  591333 pod_ready.go:94] pod "coredns-66bc5c9577-vfj56" is "Ready"
	I0917 00:29:51.802634  591333 pod_ready.go:86] duration metric: took 4.623535ms for pod "coredns-66bc5c9577-vfj56" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.805006  591333 pod_ready.go:83] waiting for pod "etcd-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.809379  591333 pod_ready.go:94] pod "etcd-ha-671025" is "Ready"
	I0917 00:29:51.809416  591333 pod_ready.go:86] duration metric: took 4.389649ms for pod "etcd-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.809427  591333 pod_ready.go:83] waiting for pod "etcd-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.813691  591333 pod_ready.go:94] pod "etcd-ha-671025-m02" is "Ready"
	I0917 00:29:51.813712  591333 pod_ready.go:86] duration metric: took 4.279249ms for pod "etcd-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.813720  591333 pod_ready.go:83] waiting for pod "etcd-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.990174  591333 request.go:683] "Waited before sending request" delay="176.338354ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671025-m03"
	I0917 00:29:52.190229  591333 request.go:683] "Waited before sending request" delay="196.333995ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m03"
	I0917 00:29:52.193665  591333 pod_ready.go:94] pod "etcd-ha-671025-m03" is "Ready"
	I0917 00:29:52.193693  591333 pod_ready.go:86] duration metric: took 379.968155ms for pod "etcd-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:52.390210  591333 request.go:683] "Waited before sending request" delay="196.377999ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0917 00:29:52.394451  591333 pod_ready.go:83] waiting for pod "kube-apiserver-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:52.590608  591333 request.go:683] "Waited before sending request" delay="196.01886ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671025"
	I0917 00:29:52.790098  591333 request.go:683] "Waited before sending request" delay="196.369455ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025"
	I0917 00:29:52.793544  591333 pod_ready.go:94] pod "kube-apiserver-ha-671025" is "Ready"
	I0917 00:29:52.793578  591333 pod_ready.go:86] duration metric: took 399.098458ms for pod "kube-apiserver-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:52.793595  591333 pod_ready.go:83] waiting for pod "kube-apiserver-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:52.990070  591333 request.go:683] "Waited before sending request" delay="196.355614ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671025-m02"
	I0917 00:29:53.190086  591333 request.go:683] "Waited before sending request" delay="196.360413ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m02"
	I0917 00:29:53.193284  591333 pod_ready.go:94] pod "kube-apiserver-ha-671025-m02" is "Ready"
	I0917 00:29:53.193311  591333 pod_ready.go:86] duration metric: took 399.708595ms for pod "kube-apiserver-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:53.193320  591333 pod_ready.go:83] waiting for pod "kube-apiserver-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:53.390584  591333 request.go:683] "Waited before sending request" delay="197.147317ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671025-m03"
	I0917 00:29:53.590103  591333 request.go:683] "Waited before sending request" delay="196.290111ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m03"
	I0917 00:29:53.593362  591333 pod_ready.go:94] pod "kube-apiserver-ha-671025-m03" is "Ready"
	I0917 00:29:53.593412  591333 pod_ready.go:86] duration metric: took 400.084881ms for pod "kube-apiserver-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:53.790733  591333 request.go:683] "Waited before sending request" delay="197.180718ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0917 00:29:53.794548  591333 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:53.989879  591333 request.go:683] "Waited before sending request" delay="195.193469ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671025"
	I0917 00:29:54.190518  591333 request.go:683] "Waited before sending request" delay="197.369336ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025"
	I0917 00:29:54.194152  591333 pod_ready.go:94] pod "kube-controller-manager-ha-671025" is "Ready"
	I0917 00:29:54.194183  591333 pod_ready.go:86] duration metric: took 399.607782ms for pod "kube-controller-manager-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:54.194195  591333 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:54.390598  591333 request.go:683] "Waited before sending request" delay="196.290873ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671025-m02"
	I0917 00:29:54.590577  591333 request.go:683] "Waited before sending request" delay="196.311056ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m02"
	I0917 00:29:54.594360  591333 pod_ready.go:94] pod "kube-controller-manager-ha-671025-m02" is "Ready"
	I0917 00:29:54.594432  591333 pod_ready.go:86] duration metric: took 400.227353ms for pod "kube-controller-manager-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:54.594445  591333 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:54.789830  591333 request.go:683] "Waited before sending request" delay="195.263054ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671025-m03"
	I0917 00:29:54.990466  591333 request.go:683] "Waited before sending request" delay="197.342033ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m03"
	I0917 00:29:54.993759  591333 pod_ready.go:94] pod "kube-controller-manager-ha-671025-m03" is "Ready"
	I0917 00:29:54.993788  591333 pod_ready.go:86] duration metric: took 399.335381ms for pod "kube-controller-manager-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:55.190138  591333 request.go:683] "Waited before sending request" delay="196.195607ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0917 00:29:55.194060  591333 pod_ready.go:83] waiting for pod "kube-proxy-4k8lz" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:55.390543  591333 request.go:683] "Waited before sending request" delay="196.36227ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4k8lz"
	I0917 00:29:55.590492  591333 request.go:683] "Waited before sending request" delay="196.425967ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m02"
	I0917 00:29:55.593719  591333 pod_ready.go:94] pod "kube-proxy-4k8lz" is "Ready"
	I0917 00:29:55.593746  591333 pod_ready.go:86] duration metric: took 399.654072ms for pod "kube-proxy-4k8lz" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:55.593753  591333 pod_ready.go:83] waiting for pod "kube-proxy-f58dt" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:55.790222  591333 request.go:683] "Waited before sending request" delay="196.381687ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f58dt"
	I0917 00:29:55.990078  591333 request.go:683] "Waited before sending request" delay="196.35386ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025"
	I0917 00:29:55.993537  591333 pod_ready.go:94] pod "kube-proxy-f58dt" is "Ready"
	I0917 00:29:55.993565  591333 pod_ready.go:86] duration metric: took 399.806033ms for pod "kube-proxy-f58dt" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:55.993573  591333 pod_ready.go:83] waiting for pod "kube-proxy-q96zd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:56.190000  591333 request.go:683] "Waited before sending request" delay="196.348448ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q96zd"
	I0917 00:29:56.390582  591333 request.go:683] "Waited before sending request" delay="197.229029ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m03"
	I0917 00:29:56.393563  591333 pod_ready.go:94] pod "kube-proxy-q96zd" is "Ready"
	I0917 00:29:56.393592  591333 pod_ready.go:86] duration metric: took 400.012384ms for pod "kube-proxy-q96zd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:56.590057  591333 request.go:683] "Waited before sending request" delay="196.329973ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I0917 00:29:56.593914  591333 pod_ready.go:83] waiting for pod "kube-scheduler-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:56.790433  591333 request.go:683] "Waited before sending request" delay="196.375831ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671025"
	I0917 00:29:56.990073  591333 request.go:683] "Waited before sending request" delay="196.373603ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025"
	I0917 00:29:56.993259  591333 pod_ready.go:94] pod "kube-scheduler-ha-671025" is "Ready"
	I0917 00:29:56.993288  591333 pod_ready.go:86] duration metric: took 399.350969ms for pod "kube-scheduler-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:56.993297  591333 pod_ready.go:83] waiting for pod "kube-scheduler-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:57.190549  591333 request.go:683] "Waited before sending request" delay="197.173424ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671025-m02"
	I0917 00:29:57.390069  591333 request.go:683] "Waited before sending request" delay="196.377477ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m02"
	I0917 00:29:57.393214  591333 pod_ready.go:94] pod "kube-scheduler-ha-671025-m02" is "Ready"
	I0917 00:29:57.393243  591333 pod_ready.go:86] duration metric: took 399.939467ms for pod "kube-scheduler-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:57.393254  591333 pod_ready.go:83] waiting for pod "kube-scheduler-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:57.590599  591333 request.go:683] "Waited before sending request" delay="197.214476ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671025-m03"
	I0917 00:29:57.790207  591333 request.go:683] "Waited before sending request" delay="196.332231ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m03"
	I0917 00:29:57.793613  591333 pod_ready.go:94] pod "kube-scheduler-ha-671025-m03" is "Ready"
	I0917 00:29:57.793646  591333 pod_ready.go:86] duration metric: took 400.384119ms for pod "kube-scheduler-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:57.793660  591333 pod_ready.go:40] duration metric: took 6.00487949s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
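The repeated request.go "Waited before sending request" entries above are client-go's client-side throttling: the rest.Config dumps earlier in this log show QPS:0 and Burst:0, so the default token-bucket limiter (5 requests/second, burst 10) applies, and the burst of pod and node GETs gets spaced out at roughly 200ms intervals. Where that delay matters, the limits can be raised on the config before building the clientset; a sketch (newFastClient is a hypothetical helper name):

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset with a higher client-side rate limit than
// the client-go defaults of QPS 5 / burst 10.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := newFastClient("/var/lib/minikube/kubeconfig"); err != nil {
		panic(err)
	}
}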
	I0917 00:29:57.841958  591333 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 00:29:57.843747  591333 out.go:179] * Done! kubectl is now configured to use "ha-671025" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 17 00:28:42 ha-671025 crio[943]: time="2025-09-17 00:28:42.206543981Z" level=info msg="Starting container: 1b2322cca73664c31f8f758bee585a6b9e12f3a99cb34f8075ed9d4ba6a7424e" id=3b28becd-1d34-462d-9922-4034e8ecf6f4 name=/runtime.v1.RuntimeService/StartContainer
	Sep 17 00:28:42 ha-671025 crio[943]: time="2025-09-17 00:28:42.215619295Z" level=info msg="Started container" PID=2320 containerID=1b2322cca73664c31f8f758bee585a6b9e12f3a99cb34f8075ed9d4ba6a7424e description=kube-system/coredns-66bc5c9577-vfj56/coredns id=3b28becd-1d34-462d-9922-4034e8ecf6f4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39dc71832b8bb399ba20ce48f2427629524276766208427b4f7705d2c0d5a7bc
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.112704664Z" level=info msg="Running pod sandbox: default/busybox-7b57f96db7-wj4r5/POD" id=736d7d5c-e0a6-4add-85d8-01da4ad50ed0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.112791033Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.130623397Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-wj4r5 Namespace:default ID:6347f27b59723d9ed5d766202817f12864c3d029b677244c2214fe27b0e75f0f UID:90adda6e-a8af-41fd-880e-3820a76c660d NetNS:/var/run/netns/54f65633-04cf-4581-8596-83e8bb3b45c1 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.130669888Z" level=info msg="Adding pod default_busybox-7b57f96db7-wj4r5 to CNI network \"kindnet\" (type=ptp)"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.142401777Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-wj4r5 Namespace:default ID:6347f27b59723d9ed5d766202817f12864c3d029b677244c2214fe27b0e75f0f UID:90adda6e-a8af-41fd-880e-3820a76c660d NetNS:/var/run/netns/54f65633-04cf-4581-8596-83e8bb3b45c1 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.142574298Z" level=info msg="Checking pod default_busybox-7b57f96db7-wj4r5 for CNI network kindnet (type=ptp)"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.143612429Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.144813443Z" level=info msg="Ran pod sandbox 6347f27b59723d9ed5d766202817f12864c3d029b677244c2214fe27b0e75f0f with infra container: default/busybox-7b57f96db7-wj4r5/POD" id=736d7d5c-e0a6-4add-85d8-01da4ad50ed0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.146339053Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=b8619712-84fc-406a-a07d-46448e259e67 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.146578417Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=b8619712-84fc-406a-a07d-46448e259e67 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.147237951Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=4869ff93-ff5d-4c5f-bc8f-3cabe3c7db56 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.148635276Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.991719699Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.350447433Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=4869ff93-ff5d-4c5f-bc8f-3cabe3c7db56 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.351203929Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=2f8c5eb2-d95f-4e4e-9638-5776fd3166b1 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.352357885Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2f8c5eb2-d95f-4e4e-9638-5776fd3166b1 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.353373442Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=abfbef5f-c90d-4ad8-b2a8-4baf401fbd2d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.354669415Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=abfbef5f-c90d-4ad8-b2a8-4baf401fbd2d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.358933450Z" level=info msg="Creating container: default/busybox-7b57f96db7-wj4r5/busybox" id=05a5a4c3-ddd6-4e31-bcd3-15fa6fbc19a8 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.359053527Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.435258478Z" level=info msg="Created container 7f97d1a1e175b51d7a889f9fe8b94ec1d245d9c3ad1f48bb929cc3544665036a: default/busybox-7b57f96db7-wj4r5/busybox" id=05a5a4c3-ddd6-4e31-bcd3-15fa6fbc19a8 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.436586730Z" level=info msg="Starting container: 7f97d1a1e175b51d7a889f9fe8b94ec1d245d9c3ad1f48bb929cc3544665036a" id=134529e8-d9b9-4298-b3e5-c73a5d72f6fd name=/runtime.v1.RuntimeService/StartContainer
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.446220694Z" level=info msg="Started container" PID=2585 containerID=7f97d1a1e175b51d7a889f9fe8b94ec1d245d9c3ad1f48bb929cc3544665036a description=default/busybox-7b57f96db7-wj4r5/busybox id=134529e8-d9b9-4298-b3e5-c73a5d72f6fd name=/runtime.v1.RuntimeService/StartContainer sandboxID=6347f27b59723d9ed5d766202817f12864c3d029b677244c2214fe27b0e75f0f
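The CRI-O entries above trace one complete CRI lifecycle for the busybox pod: RunPodSandbox -> ImageStatus -> PullImage -> CreateContainer -> StartContainer. The resulting container state can be checked on the node with crictl, which speaks the same CRI API; a small Go wrapper as a sketch (assumes crictl is installed and configured for the crio socket):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// `crictl ps -a` lists all containers known to the runtime, mirroring the
	// "container status" section that follows in this report.
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	os.Stdout.Write(out)
}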
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7f97d1a1e175b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   About a minute ago   Running             busybox                   0                   6347f27b59723       busybox-7b57f96db7-wj4r5
	1b2322cca7366       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      2 minutes ago        Running             coredns                   0                   39dc71832b8bb       coredns-66bc5c9577-vfj56
	2f150c7f7dc18       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Running             storage-provisioner       0                   f228c8ac21369       storage-provisioner
	4fd73d6446292       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      2 minutes ago        Running             coredns                   0                   92ca6f4389168       coredns-66bc5c9577-mqh24
	97d03ed4f05c2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      2 minutes ago        Running             kindnet-cni               0                   ad7fd40f66a01       kindnet-9zvhz
	beeb8e61abad9       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      2 minutes ago        Running             kube-proxy                0                   527193be2b767       kube-proxy-f58dt
	ecb56d4cc4c88       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     2 minutes ago        Running             kube-vip                  0                   852e4beaeede7       kube-vip-ha-671025
	7a41c39db49f4       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      2 minutes ago        Running             kube-scheduler            0                   2a00cabb8a637       kube-scheduler-ha-671025
	d4e775bc05e92       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      2 minutes ago        Running             kube-apiserver            0                   e909c5565b688       kube-apiserver-ha-671025
	b966a80c48716       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      2 minutes ago        Running             kube-controller-manager   0                   9e2f63f3286f1       kube-controller-manager-ha-671025
	7819068a50e98       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      2 minutes ago        Running             etcd                      0                   985f7f1c3407d       etcd-ha-671025
	
	
	==> coredns [1b2322cca73664c31f8f758bee585a6b9e12f3a99cb34f8075ed9d4ba6a7424e] <==
	[INFO] 10.244.0.4:52527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000231229s
	[INFO] 10.244.0.4:39416 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.0015558s
	[INFO] 10.244.0.4:45468 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000706318s
	[INFO] 10.244.0.4:53485 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.000087472s
	[INFO] 10.244.1.2:37939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156622s
	[INFO] 10.244.1.2:47463 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.000147027s
	[INFO] 10.244.2.2:34151 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011555178s
	[INFO] 10.244.2.2:39096 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.081855349s
	[INFO] 10.244.2.2:40937 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000241541s
	[INFO] 10.244.0.4:56066 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205334s
	[INFO] 10.244.0.4:52703 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000134531s
	[INFO] 10.244.0.4:56844 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000105782s
	[INFO] 10.244.0.4:52436 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144945s
	[INFO] 10.244.1.2:42520 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154899s
	[INFO] 10.244.1.2:36438 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000196498s
	[INFO] 10.244.2.2:42902 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170395s
	[INFO] 10.244.2.2:44897 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143905s
	[INFO] 10.244.0.4:59616 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105243s
	[INFO] 10.244.1.2:39631 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002321s
	[INFO] 10.244.1.2:59007 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009976s
	[INFO] 10.244.2.2:53521 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000146002s
	[INFO] 10.244.2.2:56762 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000164207s
	[INFO] 10.244.0.4:51093 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145402s
	[INFO] 10.244.0.4:37880 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097925s
	[INFO] 10.244.1.2:55010 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144896s
	
	
	==> coredns [4fd73d6446292f190b136d89cd25bf39fce256818f5056f6d2665d5e4fa5ebd5] <==
	[INFO] 10.244.2.2:37478 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001401s
	[INFO] 10.244.0.4:32873 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013759s
	[INFO] 10.244.0.4:37452 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.006758446s
	[INFO] 10.244.0.4:53096 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156627s
	[INFO] 10.244.0.4:33933 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125115s
	[INFO] 10.244.1.2:46463 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000282565s
	[INFO] 10.244.1.2:39686 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00021884s
	[INFO] 10.244.1.2:54348 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01683783s
	[INFO] 10.244.1.2:54156 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000247643s
	[INFO] 10.244.1.2:51012 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000248315s
	[INFO] 10.244.1.2:49586 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095306s
	[INFO] 10.244.2.2:42847 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150928s
	[INFO] 10.244.2.2:38291 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000461737s
	[INFO] 10.244.0.4:57992 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127693s
	[INFO] 10.244.0.4:53956 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000219562s
	[INFO] 10.244.0.4:34480 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117878s
	[INFO] 10.244.1.2:37372 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177692s
	[INFO] 10.244.1.2:44790 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000227814s
	[INFO] 10.244.2.2:55057 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193926s
	[INFO] 10.244.2.2:51005 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158043s
	[INFO] 10.244.0.4:57976 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000144447s
	[INFO] 10.244.0.4:45233 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113362s
	[INFO] 10.244.1.2:59399 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116822s
	[INFO] 10.244.1.2:55814 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000105565s
	[INFO] 10.244.1.2:33844 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000129758s
	
	
	==> describe nodes <==
	Name:               ha-671025
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T00_28_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:28:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:31:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:30:27 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:30:27 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:30:27 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:30:27 +0000   Wed, 17 Sep 2025 00:28:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-671025
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf085e2718b148b5ad91c414953b197e
	  System UUID:                3f139a28-0338-43b0-8ed0-9128b9dcda65
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wj4r5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 coredns-66bc5c9577-mqh24             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m44s
	  kube-system                 coredns-66bc5c9577-vfj56             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m44s
	  kube-system                 etcd-ha-671025                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m50s
	  kube-system                 kindnet-9zvhz                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m44s
	  kube-system                 kube-apiserver-ha-671025             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m50s
	  kube-system                 kube-controller-manager-ha-671025    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m50s
	  kube-system                 kube-proxy-f58dt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 kube-scheduler-ha-671025             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m50s
	  kube-system                 kube-vip-ha-671025                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m43s                  kube-proxy       
	  Normal  NodeHasSufficientPID     2m54s (x8 over 2m54s)  kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m54s (x8 over 2m54s)  kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m54s (x8 over 2m54s)  kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m50s                  kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m50s                  kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m50s                  kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m45s                  node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  NodeReady                2m33s                  kubelet          Node ha-671025 status is now: NodeReady
	  Normal  RegisteredNode           2m15s                  node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           98s                    node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	
	
	Name:               ha-671025-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_17T00_29_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:29:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:30:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:30:22 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:30:22 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:30:22 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:30:22 +0000   Wed, 17 Sep 2025 00:29:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-671025-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d9e6a6baf694e3db7d6670efecf289a
	  System UUID:                7d7ccba3-1786-4f88-a69c-4a852e967ea0
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-zw5tc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 etcd-ha-671025-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m11s
	  kube-system                 kindnet-7scsq                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m13s
	  kube-system                 kube-apiserver-ha-671025-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-controller-manager-ha-671025-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-proxy-4k8lz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-scheduler-ha-671025-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-vip-ha-671025-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        2m8s   kube-proxy       
	  Normal  RegisteredNode  2m10s  node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode  2m10s  node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode  98s    node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	
	
	Name:               ha-671025-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_17T00_29_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:29:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:31:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:30:39 +0000   Wed, 17 Sep 2025 00:29:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:30:39 +0000   Wed, 17 Sep 2025 00:29:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:30:39 +0000   Wed, 17 Sep 2025 00:29:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:30:39 +0000   Wed, 17 Sep 2025 00:29:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-671025-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 660e9daa5dff498295dc0311dee374a4
	  System UUID:                ca019c4e-efee-45a1-854b-8ad90ea7fdf4
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-dk9cf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 etcd-ha-671025-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         94s
	  kube-system                 kindnet-9w6f7                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      96s
	  kube-system                 kube-apiserver-ha-671025-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-controller-manager-ha-671025-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-q96zd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-scheduler-ha-671025-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-vip-ha-671025-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        93s   kube-proxy       
	  Normal  RegisteredNode  95s   node-controller  Node ha-671025-m03 event: Registered Node ha-671025-m03 in Controller
	  Normal  RegisteredNode  95s   node-controller  Node ha-671025-m03 event: Registered Node ha-671025-m03 in Controller
	  Normal  RegisteredNode  93s   node-controller  Node ha-671025-m03 event: Registered Node ha-671025-m03 in Controller
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	
	
	==> etcd [7819068a50e981a28f7aac6e0ffa00b30498aa7a8728f90c252a1dde8a63172c] <==
	{"level":"warn","ts":"2025-09-17T00:30:09.505268Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.27894ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040018158788372 > lease_revoke:<id:70cc995512839e0c>","response":"size:29"}
	{"level":"warn","ts":"2025-09-17T00:30:09.505347Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"144.683214ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:30:09.505532Z","caller":"traceutil/trace.go:172","msg":"trace[500820100] transaction","detail":"{read_only:false; response_revision:1005; number_of_response:1; }","duration":"139.040911ms","start":"2025-09-17T00:30:09.366470Z","end":"2025-09-17T00:30:09.505511Z","steps":["trace[500820100] 'process raft request'  (duration: 138.89516ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:30:09.505551Z","caller":"traceutil/trace.go:172","msg":"trace[1619350159] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices; range_end:; response_count:0; response_revision:1004; }","duration":"144.895328ms","start":"2025-09-17T00:30:09.360635Z","end":"2025-09-17T00:30:09.505530Z","steps":["trace[1619350159] 'agreement among raft nodes before linearized reading'  (duration: 141.300792ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:30:09.778515Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"170.407706ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:30:09.778612Z","caller":"traceutil/trace.go:172","msg":"trace[1181430234] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1005; }","duration":"170.522946ms","start":"2025-09-17T00:30:09.608073Z","end":"2025-09-17T00:30:09.778596Z","steps":["trace[1181430234] 'range keys from in-memory index tree'  (duration: 169.782684ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:30:26.742546Z","caller":"traceutil/trace.go:172","msg":"trace[1301104523] linearizableReadLoop","detail":"{readStateIndex:1240; appliedIndex:1240; }","duration":"134.800942ms","start":"2025-09-17T00:30:26.607715Z","end":"2025-09-17T00:30:26.742516Z","steps":["trace[1301104523] 'read index received'  (duration: 134.794574ms)","trace[1301104523] 'applied index is now lower than readState.Index'  (duration: 5.057µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-17T00:30:26.742702Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.951869ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:30:26.742764Z","caller":"traceutil/trace.go:172","msg":"trace[559742275] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1045; }","duration":"135.049537ms","start":"2025-09-17T00:30:26.607704Z","end":"2025-09-17T00:30:26.742754Z","steps":["trace[559742275] 'agreement among raft nodes before linearized reading'  (duration: 134.912912ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:30:26.742748Z","caller":"traceutil/trace.go:172","msg":"trace[1407010545] transaction","detail":"{read_only:false; response_revision:1046; number_of_response:1; }","duration":"138.186392ms","start":"2025-09-17T00:30:26.604547Z","end":"2025-09-17T00:30:26.742734Z","steps":["trace[1407010545] 'process raft request'  (duration: 138.044509ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:30:27.284481Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"b65d66e84a12b94b","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"23.876704ms"}
	{"level":"warn","ts":"2025-09-17T00:30:27.284588Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"58f1161d61ce118","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"23.977845ms"}
	{"level":"info","ts":"2025-09-17T00:30:27.284875Z","caller":"traceutil/trace.go:172","msg":"trace[1317115850] transaction","detail":"{read_only:false; response_revision:1048; number_of_response:1; }","duration":"128.236157ms","start":"2025-09-17T00:30:27.156624Z","end":"2025-09-17T00:30:27.284860Z","steps":["trace[1317115850] 'process raft request'  (duration: 128.097873ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:30:27.895598Z","caller":"traceutil/trace.go:172","msg":"trace[11920158] transaction","detail":"{read_only:false; response_revision:1050; number_of_response:1; }","duration":"148.026679ms","start":"2025-09-17T00:30:27.747545Z","end":"2025-09-17T00:30:27.895572Z","steps":["trace[11920158] 'process raft request'  (duration: 101.895012ms)","trace[11920158] 'compare'  (duration: 45.996426ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-17T00:31:00.426159Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:31:00.426158Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:31:00.433986Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"b65d66e84a12b94b","error":"failed to dial b65d66e84a12b94b on stream Message (EOF)"}
	{"level":"warn","ts":"2025-09-17T00:31:00.569496Z","caller":"rafthttp/stream.go:193","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b"}
	{"level":"warn","ts":"2025-09-17T00:31:01.265101Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"b65d66e84a12b94b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:31:01.265170Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b65d66e84a12b94b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:31:03.899411Z","caller":"rafthttp/stream.go:193","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b"}
	{"level":"warn","ts":"2025-09-17T00:31:05.266629Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"b65d66e84a12b94b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:31:05.266695Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b65d66e84a12b94b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:31:09.267810Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"b65d66e84a12b94b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:31:09.267892Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b65d66e84a12b94b","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	
	
	==> kernel <==
	 00:31:14 up  3:13,  0 users,  load average: 0.64, 0.47, 5.01
	Linux ha-671025 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [97d03ed4f05c2c8a7edb2014248bdbf3d9cfbee7da82980f69fec92e92471166] <==
	I0917 00:30:31.204015       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:30:41.203515       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:30:41.203557       1 main.go:301] handling current node
	I0917 00:30:41.203599       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:30:41.203604       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:30:41.203792       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:30:41.203806       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:30:51.212617       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:30:51.212663       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:30:51.212861       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:30:51.212872       1 main.go:301] handling current node
	I0917 00:30:51.212888       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:30:51.212893       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:31:01.212057       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:31:01.212092       1 main.go:301] handling current node
	I0917 00:31:01.212117       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:31:01.212124       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:31:01.212379       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:31:01.212409       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:31:11.203957       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:31:11.204000       1 main.go:301] handling current node
	I0917 00:31:11.204021       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:31:11.204028       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:31:11.204417       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:31:11.204441       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [d4e775bc05e92406988cf96c77fa7e581cfe8cc2f3f70e1efc89c2ec23a63e4a] <==
	I0917 00:28:24.764710       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0917 00:28:29.928906       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:28:29.932824       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:28:30.328091       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0917 00:28:30.429040       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0917 00:29:34.977143       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:29:44.951924       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0917 00:30:02.333807       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45142: use of closed network connection
	E0917 00:30:02.515957       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45160: use of closed network connection
	E0917 00:30:02.696738       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45172: use of closed network connection
	E0917 00:30:02.975357       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45188: use of closed network connection
	E0917 00:30:03.163201       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45206: use of closed network connection
	E0917 00:30:03.360510       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45214: use of closed network connection
	E0917 00:30:03.537260       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45238: use of closed network connection
	E0917 00:30:03.723220       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45262: use of closed network connection
	E0917 00:30:03.899588       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45288: use of closed network connection
	E0917 00:30:04.199638       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45314: use of closed network connection
	E0917 00:30:04.375427       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45330: use of closed network connection
	E0917 00:30:04.546665       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45360: use of closed network connection
	E0917 00:30:04.718966       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45380: use of closed network connection
	E0917 00:30:04.893333       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45402: use of closed network connection
	E0917 00:30:05.069202       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45414: use of closed network connection
	I0917 00:30:52.986088       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:31:02.474488       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0917 00:31:04.001528       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	
	
	==> kube-controller-manager [b966a80c487167a8ef5e8ce7981e5a50b500e5d8ce6a71e00ed74b342da31465] <==
	I0917 00:28:29.324302       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:28:29.324327       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0917 00:28:29.324356       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0917 00:28:29.325297       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0917 00:28:29.325324       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0917 00:28:29.325364       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0917 00:28:29.325335       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0917 00:28:29.325427       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0917 00:28:29.326766       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0917 00:28:29.333261       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:28:29.333638       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:28:29.333657       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0917 00:28:29.333665       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 00:28:29.340961       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0917 00:28:29.343294       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0917 00:28:29.353739       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:28:44.313285       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E0917 00:29:00.309163       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-g7wk8 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-g7wk8\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0917 00:29:00.997925       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-671025-m02\" does not exist"
	I0917 00:29:01.017089       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-671025-m02" podCIDRs=["10.244.1.0/24"]
	I0917 00:29:04.315749       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-671025-m02"
	E0917 00:29:37.100559       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-4vrlk failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-4vrlk\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0917 00:29:38.581695       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-671025-m03\" does not exist"
	I0917 00:29:38.589924       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-671025-m03" podCIDRs=["10.244.2.0/24"]
	I0917 00:29:39.436557       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-671025-m03"
	
	
	==> kube-proxy [beeb8e61abad9cff9c53d8b6d7bd473fa1b23bbe18bf4739d34ffc8956376ff2] <==
	I0917 00:28:30.830323       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:28:30.891652       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:28:30.992026       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:28:30.992089       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:28:30.992227       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:28:31.013108       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:28:31.013179       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:28:31.018687       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:28:31.019218       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:28:31.019253       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:28:31.020737       1 config.go:200] "Starting service config controller"
	I0917 00:28:31.020764       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:28:31.020800       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:28:31.020809       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:28:31.020897       1 config.go:309] "Starting node config controller"
	I0917 00:28:31.020964       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:28:31.021001       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:28:31.021018       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:28:31.021055       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:28:31.121005       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:28:31.121031       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:28:31.121168       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7a41c39db49f45380d579839f82d520984625d29f4dabaef0381390e6bdf676a] <==
	E0917 00:28:22.635845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:28:22.635883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0917 00:28:22.635646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0917 00:28:22.635968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0917 00:28:22.636038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0917 00:28:22.636058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0917 00:28:22.636404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0917 00:28:22.636428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0917 00:28:22.636582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0917 00:28:22.636623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0917 00:28:22.636965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0917 00:28:23.460819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0917 00:28:23.509027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0917 00:28:23.580561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:28:23.582654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0917 00:28:23.693685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0917 00:28:26.831507       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0917 00:29:01.061353       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-t9sbk\": pod kindnet-t9sbk is already assigned to node \"ha-671025-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-t9sbk" node="ha-671025-m02"
	E0917 00:29:01.061564       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 138da6b8-9faf-407f-8647-78ecb92029f1(kube-system/kindnet-t9sbk) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-t9sbk"
	E0917 00:29:01.061607       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-t9sbk\": pod kindnet-t9sbk is already assigned to node \"ha-671025-m02\"" logger="UnhandledError" pod="kube-system/kindnet-t9sbk"
	I0917 00:29:01.062825       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-t9sbk" node="ha-671025-m02"
	E0917 00:29:38.625075       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-q96zd\": pod kube-proxy-q96zd is already assigned to node \"ha-671025-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-q96zd" node="ha-671025-m03"
	E0917 00:29:38.625173       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 9fe8a312-c296-4c84-9c30-5e578c24e82e(kube-system/kube-proxy-q96zd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-q96zd"
	E0917 00:29:38.625194       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-q96zd\": pod kube-proxy-q96zd is already assigned to node \"ha-671025-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-q96zd"
	I0917 00:29:38.626798       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-q96zd" node="ha-671025-m03"
	
	
	==> kubelet <==
	Sep 17 00:29:14 ha-671025 kubelet[1668]: E0917 00:29:14.585207    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068954584899808  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:24 ha-671025 kubelet[1668]: E0917 00:29:24.586593    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068964586327984  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:24 ha-671025 kubelet[1668]: E0917 00:29:24.586624    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068964586327984  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:34 ha-671025 kubelet[1668]: E0917 00:29:34.587985    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068974587766323  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:34 ha-671025 kubelet[1668]: E0917 00:29:34.588046    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068974587766323  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:44 ha-671025 kubelet[1668]: E0917 00:29:44.589297    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068984589063590  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:44 ha-671025 kubelet[1668]: E0917 00:29:44.589343    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068984589063590  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:54 ha-671025 kubelet[1668]: E0917 00:29:54.592592    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068994591703153  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:54 ha-671025 kubelet[1668]: E0917 00:29:54.592634    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068994591703153  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 17 00:29:58 ha-671025 kubelet[1668]: I0917 00:29:58.902373    1668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n7vc\" (UniqueName: \"kubernetes.io/projected/90adda6e-a8af-41fd-880e-3820a76c660d-kube-api-access-2n7vc\") pod \"busybox-7b57f96db7-wj4r5\" (UID: \"90adda6e-a8af-41fd-880e-3820a76c660d\") " pod="default/busybox-7b57f96db7-wj4r5"
	Sep 17 00:30:02 ha-671025 kubelet[1668]: E0917 00:30:02.515952    1668 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41316->127.0.0.1:37239: write tcp 127.0.0.1:41316->127.0.0.1:37239: write: broken pipe
	Sep 17 00:30:04 ha-671025 kubelet[1668]: E0917 00:30:04.594113    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069004593825500  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:04 ha-671025 kubelet[1668]: E0917 00:30:04.594155    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069004593825500  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:14 ha-671025 kubelet[1668]: E0917 00:30:14.595504    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069014595204257  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:14 ha-671025 kubelet[1668]: E0917 00:30:14.595637    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069014595204257  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:24 ha-671025 kubelet[1668]: E0917 00:30:24.597161    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069024596864722  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:24 ha-671025 kubelet[1668]: E0917 00:30:24.597200    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069024596864722  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:34 ha-671025 kubelet[1668]: E0917 00:30:34.598240    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069034598011866  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:34 ha-671025 kubelet[1668]: E0917 00:30:34.598284    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069034598011866  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:44 ha-671025 kubelet[1668]: E0917 00:30:44.600122    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069044599859993  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:44 ha-671025 kubelet[1668]: E0917 00:30:44.600164    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069044599859993  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:54 ha-671025 kubelet[1668]: E0917 00:30:54.601918    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069054601658769  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:54 ha-671025 kubelet[1668]: E0917 00:30:54.601958    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069054601658769  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:31:04 ha-671025 kubelet[1668]: E0917 00:31:04.604079    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069064603787483  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:31:04 ha-671025 kubelet[1668]: E0917 00:31:04.604118    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069064603787483  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-671025 -n ha-671025
helpers_test.go:269: (dbg) Run:  kubectl --context ha-671025 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (21.85s)
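Note: the repeated kubelet eviction_manager errors in the log dump above ("failed to get HasDedicatedImageFs: missing image stats") mean the kubelet could not parse CRI-O's image-filesystem stats; they recur roughly every 10s and are background noise rather than the cause of this failure. A minimal sketch for querying the same CRI stats by hand, assuming crictl is present in the node image (profile name taken from the logs):

	# Ask the CRI runtime for the image filesystem info the eviction manager needs
	out/minikube-linux-amd64 -p ha-671025 ssh -- sudo crictl imagefsinfo

The output should include the /var/lib/containers/storage/overlay-images mountpoint plus usedBytes and inodesUsed, matching the fields quoted in the errors.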

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (48.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-671025 node start m02 --alsologtostderr -v 5: (8.655990415s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5: exit status 7 (736.430973ms)

                                                
                                                
-- stdout --
	ha-671025
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:31:24.397906  614473 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:31:24.398205  614473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:31:24.398216  614473 out.go:374] Setting ErrFile to fd 2...
	I0917 00:31:24.398221  614473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:31:24.398418  614473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:31:24.398600  614473 out.go:368] Setting JSON to false
	I0917 00:31:24.398621  614473 mustload.go:65] Loading cluster: ha-671025
	I0917 00:31:24.398764  614473 notify.go:220] Checking for updates...
	I0917 00:31:24.399043  614473 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:31:24.399078  614473 status.go:174] checking status of ha-671025 ...
	I0917 00:31:24.399658  614473 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:31:24.420182  614473 status.go:371] ha-671025 host status = "Running" (err=<nil>)
	I0917 00:31:24.420226  614473 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:31:24.420556  614473 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:31:24.440386  614473 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:31:24.440716  614473 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:24.440760  614473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:31:24.459689  614473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:31:24.554183  614473 ssh_runner.go:195] Run: systemctl --version
	I0917 00:31:24.558878  614473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:24.571752  614473 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:31:24.632481  614473 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:31:24.621785115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:31:24.633037  614473 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:24.633086  614473 api_server.go:166] Checking apiserver status ...
	I0917 00:31:24.633130  614473 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:24.645894  614473 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	W0917 00:31:24.656155  614473 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:24.656217  614473 ssh_runner.go:195] Run: ls
	I0917 00:31:24.659931  614473 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:24.664248  614473 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:24.664274  614473 status.go:463] ha-671025 apiserver status = Running (err=<nil>)
	I0917 00:31:24.664288  614473 status.go:176] ha-671025 status: &{Name:ha-671025 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:24.664315  614473 status.go:174] checking status of ha-671025-m02 ...
	I0917 00:31:24.664616  614473 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:31:24.683401  614473 status.go:371] ha-671025-m02 host status = "Running" (err=<nil>)
	I0917 00:31:24.683432  614473 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:31:24.683691  614473 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:31:24.702117  614473 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:31:24.702406  614473 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:24.702455  614473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:31:24.720508  614473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:31:24.815024  614473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:24.827703  614473 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:24.827735  614473 api_server.go:166] Checking apiserver status ...
	I0917 00:31:24.827771  614473 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:24.839982  614473 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/346/cgroup
	W0917 00:31:24.851050  614473 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/346/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:24.851110  614473 ssh_runner.go:195] Run: ls
	I0917 00:31:24.855328  614473 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:24.859890  614473 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:24.859920  614473 status.go:463] ha-671025-m02 apiserver status = Running (err=<nil>)
	I0917 00:31:24.859932  614473 status.go:176] ha-671025-m02 status: &{Name:ha-671025-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:24.859955  614473 status.go:174] checking status of ha-671025-m03 ...
	I0917 00:31:24.860240  614473 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:31:24.878566  614473 status.go:371] ha-671025-m03 host status = "Running" (err=<nil>)
	I0917 00:31:24.878593  614473 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:31:24.878935  614473 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:31:24.898714  614473 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:31:24.898975  614473 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:24.899011  614473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:31:24.917171  614473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:31:25.013239  614473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:25.025980  614473 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:25.026014  614473 api_server.go:166] Checking apiserver status ...
	I0917 00:31:25.026066  614473 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:25.039471  614473 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup
	W0917 00:31:25.050466  614473 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:25.050517  614473 ssh_runner.go:195] Run: ls
	I0917 00:31:25.054276  614473 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:25.059136  614473 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:25.059160  614473 status.go:463] ha-671025-m03 apiserver status = Running (err=<nil>)
	I0917 00:31:25.059169  614473 status.go:176] ha-671025-m03 status: &{Name:ha-671025-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:25.059187  614473 status.go:174] checking status of ha-671025-m04 ...
	I0917 00:31:25.059563  614473 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:31:25.079308  614473 status.go:371] ha-671025-m04 host status = "Stopped" (err=<nil>)
	I0917 00:31:25.079331  614473 status.go:384] host is not running, skipping remaining checks
	I0917 00:31:25.079339  614473 status.go:176] ha-671025-m04 status: &{Name:ha-671025-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:31:25.086672  521273 retry.go:31] will retry after 1.146671459s: exit status 7
E0917 00:31:25.128043  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:31:25.134573  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:31:25.146054  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:31:25.167493  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:31:25.208942  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:31:25.290408  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:31:25.451935  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:31:25.773677  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
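Note: the cert_rotation errors above are stale-kubeconfig noise: a context still references the client certificate of the functional-836309 profile, which was deleted earlier in the run, so every certificate reload fails with "no such file or directory". They are unrelated to this test. A hedged cleanup sketch, assuming the leftover context, cluster, and user entries share the profile name:

	kubectl config get-contexts
	kubectl config delete-context functional-836309
	kubectl config delete-cluster functional-836309
	kubectl config delete-user functional-836309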
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5
E0917 00:31:26.415946  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5: exit status 7 (725.709786ms)

                                                
                                                
-- stdout --
	ha-671025
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:31:26.282530  614707 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:31:26.282659  614707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:31:26.282671  614707 out.go:374] Setting ErrFile to fd 2...
	I0917 00:31:26.282677  614707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:31:26.282863  614707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:31:26.283052  614707 out.go:368] Setting JSON to false
	I0917 00:31:26.283074  614707 mustload.go:65] Loading cluster: ha-671025
	I0917 00:31:26.283157  614707 notify.go:220] Checking for updates...
	I0917 00:31:26.283452  614707 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:31:26.283476  614707 status.go:174] checking status of ha-671025 ...
	I0917 00:31:26.283888  614707 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:31:26.306037  614707 status.go:371] ha-671025 host status = "Running" (err=<nil>)
	I0917 00:31:26.306115  614707 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:31:26.306474  614707 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:31:26.325485  614707 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:31:26.325758  614707 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:26.325802  614707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:31:26.344559  614707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:31:26.439626  614707 ssh_runner.go:195] Run: systemctl --version
	I0917 00:31:26.444495  614707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:26.456989  614707 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:31:26.512408  614707 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:31:26.50232874 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:31:26.512950  614707 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:26.512980  614707 api_server.go:166] Checking apiserver status ...
	I0917 00:31:26.513016  614707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:26.525868  614707 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	W0917 00:31:26.535968  614707 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:26.536030  614707 ssh_runner.go:195] Run: ls
	I0917 00:31:26.539893  614707 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:26.544239  614707 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:26.544263  614707 status.go:463] ha-671025 apiserver status = Running (err=<nil>)
	I0917 00:31:26.544274  614707 status.go:176] ha-671025 status: &{Name:ha-671025 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:26.544290  614707 status.go:174] checking status of ha-671025-m02 ...
	I0917 00:31:26.544554  614707 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:31:26.562849  614707 status.go:371] ha-671025-m02 host status = "Running" (err=<nil>)
	I0917 00:31:26.562877  614707 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:31:26.563191  614707 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:31:26.582698  614707 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:31:26.582988  614707 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:26.583029  614707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:31:26.600747  614707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:31:26.693934  614707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:26.706370  614707 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:26.706418  614707 api_server.go:166] Checking apiserver status ...
	I0917 00:31:26.706461  614707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:26.718022  614707 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/346/cgroup
	W0917 00:31:26.728504  614707 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/346/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:26.728578  614707 ssh_runner.go:195] Run: ls
	I0917 00:31:26.732584  614707 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:26.738662  614707 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:26.738687  614707 status.go:463] ha-671025-m02 apiserver status = Running (err=<nil>)
	I0917 00:31:26.738695  614707 status.go:176] ha-671025-m02 status: &{Name:ha-671025-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:26.738717  614707 status.go:174] checking status of ha-671025-m03 ...
	I0917 00:31:26.738983  614707 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:31:26.756764  614707 status.go:371] ha-671025-m03 host status = "Running" (err=<nil>)
	I0917 00:31:26.756799  614707 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:31:26.757073  614707 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:31:26.775778  614707 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:31:26.776031  614707 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:26.776076  614707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:31:26.794152  614707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:31:26.888650  614707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:26.901927  614707 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:26.901956  614707 api_server.go:166] Checking apiserver status ...
	I0917 00:31:26.901997  614707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:26.914268  614707 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup
	W0917 00:31:26.926479  614707 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:26.926567  614707 ssh_runner.go:195] Run: ls
	I0917 00:31:26.931229  614707 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:26.935823  614707 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:26.935850  614707 status.go:463] ha-671025-m03 apiserver status = Running (err=<nil>)
	I0917 00:31:26.935859  614707 status.go:176] ha-671025-m03 status: &{Name:ha-671025-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:26.935875  614707 status.go:174] checking status of ha-671025-m04 ...
	I0917 00:31:26.936122  614707 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:31:26.954959  614707 status.go:371] ha-671025-m04 host status = "Stopped" (err=<nil>)
	I0917 00:31:26.954986  614707 status.go:384] host is not running, skipping remaining checks
	I0917 00:31:26.954994  614707 status.go:176] ha-671025-m04 status: &{Name:ha-671025-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
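Note: the "unable to find freezer cgroup" warnings in these stderr dumps are expected on a cgroup v2 host: the v1 freezer controller has no separate /proc/<pid>/cgroup entry there, so the egrep probe exits 1 and minikube falls back to the apiserver healthz check, which returned 200. One way to confirm the cgroup mode, assuming SSH access to the node:

	# "cgroup2fs" indicates cgroup v2; "tmpfs" indicates a v1 hierarchy
	out/minikube-linux-amd64 -p ha-671025 ssh -- stat -fc %T /sys/fs/cgroup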
I0917 00:31:26.960961  521273 retry.go:31] will retry after 1.388884339s: exit status 7
E0917 00:31:27.698086  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5: exit status 7 (750.199977ms)

                                                
                                                
-- stdout --
	ha-671025
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:31:28.405865  614921 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:31:28.406146  614921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:31:28.406155  614921 out.go:374] Setting ErrFile to fd 2...
	I0917 00:31:28.406160  614921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:31:28.406409  614921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:31:28.406603  614921 out.go:368] Setting JSON to false
	I0917 00:31:28.406625  614921 mustload.go:65] Loading cluster: ha-671025
	I0917 00:31:28.406777  614921 notify.go:220] Checking for updates...
	I0917 00:31:28.407202  614921 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:31:28.407242  614921 status.go:174] checking status of ha-671025 ...
	I0917 00:31:28.407797  614921 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:31:28.430340  614921 status.go:371] ha-671025 host status = "Running" (err=<nil>)
	I0917 00:31:28.430367  614921 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:31:28.430646  614921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:31:28.450650  614921 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:31:28.450955  614921 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:28.451008  614921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:31:28.472873  614921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:31:28.567996  614921 ssh_runner.go:195] Run: systemctl --version
	I0917 00:31:28.572902  614921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:28.586274  614921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:31:28.645550  614921 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:31:28.634782851 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:31:28.646104  614921 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:28.646139  614921 api_server.go:166] Checking apiserver status ...
	I0917 00:31:28.646175  614921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:28.659214  614921 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	W0917 00:31:28.670201  614921 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:28.670259  614921 ssh_runner.go:195] Run: ls
	I0917 00:31:28.674427  614921 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:28.679135  614921 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:28.679158  614921 status.go:463] ha-671025 apiserver status = Running (err=<nil>)
	I0917 00:31:28.679169  614921 status.go:176] ha-671025 status: &{Name:ha-671025 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:28.679185  614921 status.go:174] checking status of ha-671025-m02 ...
	I0917 00:31:28.679452  614921 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:31:28.698858  614921 status.go:371] ha-671025-m02 host status = "Running" (err=<nil>)
	I0917 00:31:28.698886  614921 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:31:28.699183  614921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:31:28.718264  614921 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:31:28.718590  614921 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:28.718640  614921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:31:28.737796  614921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:31:28.831963  614921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:28.845154  614921 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:28.845183  614921 api_server.go:166] Checking apiserver status ...
	I0917 00:31:28.845217  614921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:28.857891  614921 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/346/cgroup
	W0917 00:31:28.868589  614921 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/346/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:28.868644  614921 ssh_runner.go:195] Run: ls
	I0917 00:31:28.872543  614921 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:28.876887  614921 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:28.876912  614921 status.go:463] ha-671025-m02 apiserver status = Running (err=<nil>)
	I0917 00:31:28.876922  614921 status.go:176] ha-671025-m02 status: &{Name:ha-671025-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:28.876955  614921 status.go:174] checking status of ha-671025-m03 ...
	I0917 00:31:28.877236  614921 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:31:28.896833  614921 status.go:371] ha-671025-m03 host status = "Running" (err=<nil>)
	I0917 00:31:28.896860  614921 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:31:28.897161  614921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:31:28.916356  614921 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:31:28.916718  614921 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:28.916770  614921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:31:28.935507  614921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:31:29.030167  614921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:29.044142  614921 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:29.044170  614921 api_server.go:166] Checking apiserver status ...
	I0917 00:31:29.044203  614921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:29.056769  614921 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup
	W0917 00:31:29.067555  614921 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:29.067619  614921 ssh_runner.go:195] Run: ls
	I0917 00:31:29.071636  614921 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:29.075940  614921 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:29.075963  614921 status.go:463] ha-671025-m03 apiserver status = Running (err=<nil>)
	I0917 00:31:29.075972  614921 status.go:176] ha-671025-m03 status: &{Name:ha-671025-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:29.075988  614921 status.go:174] checking status of ha-671025-m04 ...
	I0917 00:31:29.076229  614921 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:31:29.095316  614921 status.go:371] ha-671025-m04 host status = "Stopped" (err=<nil>)
	I0917 00:31:29.095337  614921 status.go:384] host is not running, skipping remaining checks
	I0917 00:31:29.095344  614921 status.go:176] ha-671025-m04 status: &{Name:ha-671025-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
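Note: each status invocation exits 7 only because the worker ha-671025-m04 is Stopped; all three control-plane nodes report Running. RestartSecondaryNode restarts only m02, so the stopped worker keeps every poll at exit status 7. A recovery sketch with the same binary (node name taken from the status output above):

	out/minikube-linux-amd64 -p ha-671025 node start m04
	out/minikube-linux-amd64 -p ha-671025 status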
I0917 00:31:29.101656  521273 retry.go:31] will retry after 1.715794625s: exit status 7
E0917 00:31:30.259573  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5: exit status 7 (750.906638ms)

                                                
                                                
-- stdout --
	ha-671025
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:31:30.866816  615153 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:31:30.866927  615153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:31:30.866935  615153 out.go:374] Setting ErrFile to fd 2...
	I0917 00:31:30.866939  615153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:31:30.867173  615153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:31:30.867353  615153 out.go:368] Setting JSON to false
	I0917 00:31:30.867376  615153 mustload.go:65] Loading cluster: ha-671025
	I0917 00:31:30.867464  615153 notify.go:220] Checking for updates...
	I0917 00:31:30.867778  615153 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:31:30.867810  615153 status.go:174] checking status of ha-671025 ...
	I0917 00:31:30.868268  615153 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:31:30.890893  615153 status.go:371] ha-671025 host status = "Running" (err=<nil>)
	I0917 00:31:30.890934  615153 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:31:30.891280  615153 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:31:30.912175  615153 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:31:30.912467  615153 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:30.912535  615153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:31:30.932377  615153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:31:31.028586  615153 ssh_runner.go:195] Run: systemctl --version
	I0917 00:31:31.033220  615153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:31.047328  615153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:31:31.104744  615153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:31:31.094988823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:31:31.105355  615153 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:31.105401  615153 api_server.go:166] Checking apiserver status ...
	I0917 00:31:31.105440  615153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:31.118549  615153 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	W0917 00:31:31.129984  615153 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:31.130060  615153 ssh_runner.go:195] Run: ls
	I0917 00:31:31.134150  615153 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:31.138894  615153 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:31.138928  615153 status.go:463] ha-671025 apiserver status = Running (err=<nil>)
	I0917 00:31:31.138941  615153 status.go:176] ha-671025 status: &{Name:ha-671025 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:31.138958  615153 status.go:174] checking status of ha-671025-m02 ...
	I0917 00:31:31.139300  615153 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:31:31.159279  615153 status.go:371] ha-671025-m02 host status = "Running" (err=<nil>)
	I0917 00:31:31.159306  615153 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:31:31.159601  615153 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:31:31.178143  615153 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:31:31.178477  615153 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:31.178516  615153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:31:31.199759  615153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:31:31.299186  615153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:31.312315  615153 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:31.312344  615153 api_server.go:166] Checking apiserver status ...
	I0917 00:31:31.312385  615153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:31.325022  615153 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/346/cgroup
	W0917 00:31:31.335994  615153 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/346/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:31.336086  615153 ssh_runner.go:195] Run: ls
	I0917 00:31:31.340329  615153 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:31.344917  615153 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:31.344948  615153 status.go:463] ha-671025-m02 apiserver status = Running (err=<nil>)
	I0917 00:31:31.344958  615153 status.go:176] ha-671025-m02 status: &{Name:ha-671025-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:31.344989  615153 status.go:174] checking status of ha-671025-m03 ...
	I0917 00:31:31.345306  615153 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:31:31.363949  615153 status.go:371] ha-671025-m03 host status = "Running" (err=<nil>)
	I0917 00:31:31.363977  615153 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:31:31.364265  615153 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:31:31.383642  615153 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:31:31.383982  615153 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:31.384029  615153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:31:31.403786  615153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:31:31.499174  615153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:31.511859  615153 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:31.511904  615153 api_server.go:166] Checking apiserver status ...
	I0917 00:31:31.511948  615153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:31.524372  615153 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup
	W0917 00:31:31.535468  615153 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:31.535522  615153 ssh_runner.go:195] Run: ls
	I0917 00:31:31.539835  615153 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:31.544625  615153 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:31.544654  615153 status.go:463] ha-671025-m03 apiserver status = Running (err=<nil>)
	I0917 00:31:31.544663  615153 status.go:176] ha-671025-m03 status: &{Name:ha-671025-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:31.544678  615153 status.go:174] checking status of ha-671025-m04 ...
	I0917 00:31:31.544930  615153 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:31:31.563728  615153 status.go:371] ha-671025-m04 host status = "Stopped" (err=<nil>)
	I0917 00:31:31.563752  615153 status.go:384] host is not running, skipping remaining checks
	I0917 00:31:31.563759  615153 status.go:176] ha-671025-m04 status: &{Name:ha-671025-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:31:31.570321  521273 retry.go:31] will retry after 3.620071333s: exit status 7
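A note on the recurring "unable to find freezer cgroup" warnings above: on a cgroup v2 (unified hierarchy) host, /proc/<pid>/cgroup contains a single "0::/path" line instead of one "N:controller:/path" line per controller, so the egrep for a freezer entry exits with status 1 and minikube falls through to the /healthz probe. A minimal Go sketch of the same detection (an illustration, not minikube's actual fallback code):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// cgroup v1 lists one "N:controller:/path" line per controller
	// (including freezer); cgroup v2 exposes only "0::/path", which is
	// why the egrep in the log above finds nothing and exits 1.
	data, err := os.ReadFile("/proc/self/cgroup")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, line := range strings.Split(strings.TrimSpace(string(data)), "\n") {
		if strings.Contains(line, ":freezer:") {
			fmt.Println("cgroup v1 freezer entry:", line)
			return
		}
	}
	fmt.Println("no freezer entry: unified cgroup v2 hierarchy")
}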
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5
E0917 00:31:35.381278  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5: exit status 7 (763.501941ms)

                                                
                                                
-- stdout --
	ha-671025
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:31:35.240746  615387 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:31:35.240899  615387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:31:35.240912  615387 out.go:374] Setting ErrFile to fd 2...
	I0917 00:31:35.240918  615387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:31:35.241126  615387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:31:35.241322  615387 out.go:368] Setting JSON to false
	I0917 00:31:35.241346  615387 mustload.go:65] Loading cluster: ha-671025
	I0917 00:31:35.241472  615387 notify.go:220] Checking for updates...
	I0917 00:31:35.241953  615387 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:31:35.241992  615387 status.go:174] checking status of ha-671025 ...
	I0917 00:31:35.242632  615387 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:31:35.264225  615387 status.go:371] ha-671025 host status = "Running" (err=<nil>)
	I0917 00:31:35.264282  615387 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:31:35.264714  615387 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:31:35.285821  615387 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:31:35.286100  615387 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:35.286156  615387 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:31:35.305764  615387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:31:35.400746  615387 ssh_runner.go:195] Run: systemctl --version
	I0917 00:31:35.406973  615387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:35.420885  615387 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:31:35.485084  615387 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:31:35.473203346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:31:35.485682  615387 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:35.485715  615387 api_server.go:166] Checking apiserver status ...
	I0917 00:31:35.485753  615387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:35.498671  615387 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	W0917 00:31:35.510246  615387 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:35.510299  615387 ssh_runner.go:195] Run: ls
	I0917 00:31:35.515010  615387 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:35.521424  615387 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:35.521472  615387 status.go:463] ha-671025 apiserver status = Running (err=<nil>)
	I0917 00:31:35.521489  615387 status.go:176] ha-671025 status: &{Name:ha-671025 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:35.521513  615387 status.go:174] checking status of ha-671025-m02 ...
	I0917 00:31:35.521774  615387 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:31:35.543091  615387 status.go:371] ha-671025-m02 host status = "Running" (err=<nil>)
	I0917 00:31:35.543119  615387 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:31:35.543474  615387 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:31:35.563454  615387 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:31:35.563723  615387 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:35.563767  615387 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:31:35.583637  615387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:31:35.681553  615387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:35.695813  615387 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:35.695857  615387 api_server.go:166] Checking apiserver status ...
	I0917 00:31:35.695890  615387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:35.708112  615387 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/346/cgroup
	W0917 00:31:35.718934  615387 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/346/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:35.719000  615387 ssh_runner.go:195] Run: ls
	I0917 00:31:35.723179  615387 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:35.727653  615387 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:35.727683  615387 status.go:463] ha-671025-m02 apiserver status = Running (err=<nil>)
	I0917 00:31:35.727694  615387 status.go:176] ha-671025-m02 status: &{Name:ha-671025-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:35.727716  615387 status.go:174] checking status of ha-671025-m03 ...
	I0917 00:31:35.728123  615387 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:31:35.747907  615387 status.go:371] ha-671025-m03 host status = "Running" (err=<nil>)
	I0917 00:31:35.747932  615387 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:31:35.748238  615387 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:31:35.766512  615387 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:31:35.766785  615387 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:35.766847  615387 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:31:35.785902  615387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:31:35.881343  615387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:35.896872  615387 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:35.896904  615387 api_server.go:166] Checking apiserver status ...
	I0917 00:31:35.896935  615387 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:35.909423  615387 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup
	W0917 00:31:35.920487  615387 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:35.920552  615387 ssh_runner.go:195] Run: ls
	I0917 00:31:35.924633  615387 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:35.929011  615387 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:35.929038  615387 status.go:463] ha-671025-m03 apiserver status = Running (err=<nil>)
	I0917 00:31:35.929048  615387 status.go:176] ha-671025-m03 status: &{Name:ha-671025-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:35.929063  615387 status.go:174] checking status of ha-671025-m04 ...
	I0917 00:31:35.929319  615387 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:31:35.948896  615387 status.go:371] ha-671025-m04 host status = "Stopped" (err=<nil>)
	I0917 00:31:35.948922  615387 status.go:384] host is not running, skipping remaining checks
	I0917 00:31:35.948931  615387 status.go:176] ha-671025-m04 status: &{Name:ha-671025-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:31:35.955058  521273 retry.go:31] will retry after 6.198470341s: exit status 7
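The delays logged by retry.go (3.62s and 6.20s here, then 4.41s and 13.11s below) grow unevenly because the backoff is jittered rather than strictly doubling. A minimal sketch of that pattern, assuming a doubling base interval with up to 50% random jitter (not the actual retry.go implementation):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts are exhausted,
// doubling the base delay each round and adding random jitter so repeated
// runs do not probe in lockstep.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base << uint(i)                          // 2s, 4s, 8s, ...
		d += time.Duration(rand.Int63n(int64(d / 2))) // plus up to 50% jitter
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	_ = retryWithBackoff(4, 2*time.Second, func() error {
		return fmt.Errorf("exit status 7") // stands in for the failing status call
	})
}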
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5: exit status 7 (748.434787ms)

                                                
                                                
-- stdout --
	ha-671025
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:31:42.205139  615620 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:31:42.205404  615620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:31:42.205416  615620 out.go:374] Setting ErrFile to fd 2...
	I0917 00:31:42.205423  615620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:31:42.205662  615620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:31:42.205846  615620 out.go:368] Setting JSON to false
	I0917 00:31:42.205868  615620 mustload.go:65] Loading cluster: ha-671025
	I0917 00:31:42.205927  615620 notify.go:220] Checking for updates...
	I0917 00:31:42.206260  615620 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:31:42.206286  615620 status.go:174] checking status of ha-671025 ...
	I0917 00:31:42.206885  615620 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:31:42.227194  615620 status.go:371] ha-671025 host status = "Running" (err=<nil>)
	I0917 00:31:42.227253  615620 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:31:42.227686  615620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:31:42.247544  615620 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:31:42.247902  615620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:42.247962  615620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:31:42.268148  615620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:31:42.364611  615620 ssh_runner.go:195] Run: systemctl --version
	I0917 00:31:42.369614  615620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:42.382422  615620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:31:42.449136  615620 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:31:42.437294556 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:31:42.449975  615620 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:42.450014  615620 api_server.go:166] Checking apiserver status ...
	I0917 00:31:42.450073  615620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:42.462801  615620 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	W0917 00:31:42.473749  615620 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:42.473800  615620 ssh_runner.go:195] Run: ls
	I0917 00:31:42.477593  615620 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:42.484195  615620 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:42.484223  615620 status.go:463] ha-671025 apiserver status = Running (err=<nil>)
	I0917 00:31:42.484234  615620 status.go:176] ha-671025 status: &{Name:ha-671025 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:42.484264  615620 status.go:174] checking status of ha-671025-m02 ...
	I0917 00:31:42.484558  615620 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:31:42.503153  615620 status.go:371] ha-671025-m02 host status = "Running" (err=<nil>)
	I0917 00:31:42.503178  615620 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:31:42.503503  615620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:31:42.522009  615620 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:31:42.522302  615620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:42.522357  615620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:31:42.541654  615620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:31:42.636035  615620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:42.649069  615620 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:42.649097  615620 api_server.go:166] Checking apiserver status ...
	I0917 00:31:42.649143  615620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:42.660548  615620 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/346/cgroup
	W0917 00:31:42.671280  615620 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/346/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:42.671339  615620 ssh_runner.go:195] Run: ls
	I0917 00:31:42.676176  615620 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:42.680803  615620 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:42.680830  615620 status.go:463] ha-671025-m02 apiserver status = Running (err=<nil>)
	I0917 00:31:42.680839  615620 status.go:176] ha-671025-m02 status: &{Name:ha-671025-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:42.680855  615620 status.go:174] checking status of ha-671025-m03 ...
	I0917 00:31:42.681118  615620 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:31:42.700287  615620 status.go:371] ha-671025-m03 host status = "Running" (err=<nil>)
	I0917 00:31:42.700322  615620 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:31:42.700605  615620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:31:42.719912  615620 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:31:42.720255  615620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:42.720302  615620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:31:42.740459  615620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:31:42.835357  615620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:42.848978  615620 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:42.849011  615620 api_server.go:166] Checking apiserver status ...
	I0917 00:31:42.849108  615620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:42.861171  615620 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup
	W0917 00:31:42.871766  615620 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:42.871825  615620 ssh_runner.go:195] Run: ls
	I0917 00:31:42.875796  615620 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:42.880546  615620 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:42.880576  615620 status.go:463] ha-671025-m03 apiserver status = Running (err=<nil>)
	I0917 00:31:42.880586  615620 status.go:176] ha-671025-m03 status: &{Name:ha-671025-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:42.880603  615620 status.go:174] checking status of ha-671025-m04 ...
	I0917 00:31:42.880865  615620 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:31:42.901521  615620 status.go:371] ha-671025-m04 host status = "Stopped" (err=<nil>)
	I0917 00:31:42.901545  615620 status.go:384] host is not running, skipping remaining checks
	I0917 00:31:42.901551  615620 status.go:176] ha-671025-m04 status: &{Name:ha-671025-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:31:42.907938  521273 retry.go:31] will retry after 4.406144197s: exit status 7
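Every pass above ends with the apiserver healthz probe against the HA virtual IP returning "200: ok", so all three control planes are healthy and only the m04 worker host is down. A minimal sketch of that probe (assumptions: TLS verification is skipped here for brevity, whereas minikube authenticates using the cluster CA from the kubeconfig):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch self-contained; a real check
		// should trust the cluster CA instead.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // the log shows "200: ok"
}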
E0917 00:31:45.623104  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5: exit status 7 (744.443208ms)

                                                
                                                
-- stdout --
	ha-671025
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:31:47.364548  615855 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:31:47.364843  615855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:31:47.364853  615855 out.go:374] Setting ErrFile to fd 2...
	I0917 00:31:47.364857  615855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:31:47.365091  615855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:31:47.365287  615855 out.go:368] Setting JSON to false
	I0917 00:31:47.365311  615855 mustload.go:65] Loading cluster: ha-671025
	I0917 00:31:47.365488  615855 notify.go:220] Checking for updates...
	I0917 00:31:47.365771  615855 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:31:47.365801  615855 status.go:174] checking status of ha-671025 ...
	I0917 00:31:47.366450  615855 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:31:47.389469  615855 status.go:371] ha-671025 host status = "Running" (err=<nil>)
	I0917 00:31:47.389502  615855 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:31:47.389869  615855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:31:47.408925  615855 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:31:47.409199  615855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:47.409236  615855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:31:47.428484  615855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:31:47.523256  615855 ssh_runner.go:195] Run: systemctl --version
	I0917 00:31:47.528193  615855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:47.540387  615855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:31:47.600225  615855 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:31:47.590139126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:31:47.600802  615855 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:47.600835  615855 api_server.go:166] Checking apiserver status ...
	I0917 00:31:47.600868  615855 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:47.614523  615855 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	W0917 00:31:47.625797  615855 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:47.625863  615855 ssh_runner.go:195] Run: ls
	I0917 00:31:47.630141  615855 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:47.634728  615855 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:47.634760  615855 status.go:463] ha-671025 apiserver status = Running (err=<nil>)
	I0917 00:31:47.634772  615855 status.go:176] ha-671025 status: &{Name:ha-671025 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:47.634792  615855 status.go:174] checking status of ha-671025-m02 ...
	I0917 00:31:47.635109  615855 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:31:47.654232  615855 status.go:371] ha-671025-m02 host status = "Running" (err=<nil>)
	I0917 00:31:47.654261  615855 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:31:47.654567  615855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:31:47.674163  615855 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:31:47.674670  615855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:47.674732  615855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:31:47.693255  615855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:31:47.787320  615855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:47.800055  615855 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:47.800088  615855 api_server.go:166] Checking apiserver status ...
	I0917 00:31:47.800125  615855 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:47.812314  615855 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/346/cgroup
	W0917 00:31:47.823545  615855 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/346/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:47.823610  615855 ssh_runner.go:195] Run: ls
	I0917 00:31:47.828021  615855 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:47.832729  615855 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:47.832756  615855 status.go:463] ha-671025-m02 apiserver status = Running (err=<nil>)
	I0917 00:31:47.832766  615855 status.go:176] ha-671025-m02 status: &{Name:ha-671025-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:47.832781  615855 status.go:174] checking status of ha-671025-m03 ...
	I0917 00:31:47.833098  615855 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:31:47.852655  615855 status.go:371] ha-671025-m03 host status = "Running" (err=<nil>)
	I0917 00:31:47.852688  615855 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:31:47.853002  615855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:31:47.872891  615855 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:31:47.873182  615855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:31:47.873225  615855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:31:47.893865  615855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:31:47.988086  615855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:31:48.001099  615855 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:31:48.001134  615855 api_server.go:166] Checking apiserver status ...
	I0917 00:31:48.001176  615855 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:31:48.013664  615855 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup
	W0917 00:31:48.024942  615855 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:31:48.024992  615855 ssh_runner.go:195] Run: ls
	I0917 00:31:48.029032  615855 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:31:48.033882  615855 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:31:48.033911  615855 status.go:463] ha-671025-m03 apiserver status = Running (err=<nil>)
	I0917 00:31:48.033921  615855 status.go:176] ha-671025-m03 status: &{Name:ha-671025-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:31:48.033944  615855 status.go:174] checking status of ha-671025-m04 ...
	I0917 00:31:48.034214  615855 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:31:48.053626  615855 status.go:371] ha-671025-m04 host status = "Stopped" (err=<nil>)
	I0917 00:31:48.053656  615855 status.go:384] host is not running, skipping remaining checks
	I0917 00:31:48.053664  615855 status.go:176] ha-671025-m04 status: &{Name:ha-671025-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0917 00:31:48.060053  521273 retry.go:31] will retry after 13.113448804s: exit status 7
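The per-node checks lean on two docker inspect Go templates: one extracts the container's IPv4/IPv6 addresses, the other the host port published for 22/tcp (33148, 33158, and 33173 in this run); the extra single quotes around the second template in the log exist only because it passes through a shell. A hypothetical standalone helper running the same queries (inspect is an assumed name, not a minikube function):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspect applies a Go template to `docker container inspect` output,
// mirroring the cli_runner invocations in the log above.
func inspect(name, format string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	addrs, _ := inspect("ha-671025",
		`{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`)
	sshPort, _ := inspect("ha-671025",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	fmt.Println("addresses:", addrs)       // e.g. "192.168.49.2,"
	fmt.Println("ssh host port:", sshPort) // "33148" per the inspect dump below
}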
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5: exit status 7 (731.029898ms)

                                                
                                                
-- stdout --
	ha-671025
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:32:01.224431  616115 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:32:01.224747  616115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:32:01.224758  616115 out.go:374] Setting ErrFile to fd 2...
	I0917 00:32:01.224763  616115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:32:01.224974  616115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:32:01.225175  616115 out.go:368] Setting JSON to false
	I0917 00:32:01.225200  616115 mustload.go:65] Loading cluster: ha-671025
	I0917 00:32:01.225365  616115 notify.go:220] Checking for updates...
	I0917 00:32:01.225593  616115 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:32:01.225626  616115 status.go:174] checking status of ha-671025 ...
	I0917 00:32:01.226035  616115 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:32:01.247039  616115 status.go:371] ha-671025 host status = "Running" (err=<nil>)
	I0917 00:32:01.247099  616115 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:32:01.247403  616115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:32:01.267295  616115 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:32:01.267583  616115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:32:01.267635  616115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:01.286040  616115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:01.380683  616115 ssh_runner.go:195] Run: systemctl --version
	I0917 00:32:01.385988  616115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:32:01.398367  616115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:32:01.458011  616115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-17 00:32:01.446680545 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:32:01.458571  616115 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:32:01.458603  616115 api_server.go:166] Checking apiserver status ...
	I0917 00:32:01.458640  616115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:32:01.471220  616115 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	W0917 00:32:01.482480  616115 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:32:01.482531  616115 ssh_runner.go:195] Run: ls
	I0917 00:32:01.486363  616115 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:32:01.491374  616115 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:32:01.491423  616115 status.go:463] ha-671025 apiserver status = Running (err=<nil>)
	I0917 00:32:01.491434  616115 status.go:176] ha-671025 status: &{Name:ha-671025 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:32:01.491464  616115 status.go:174] checking status of ha-671025-m02 ...
	I0917 00:32:01.491720  616115 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:32:01.510850  616115 status.go:371] ha-671025-m02 host status = "Running" (err=<nil>)
	I0917 00:32:01.510875  616115 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:32:01.511207  616115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:32:01.529810  616115 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:32:01.530130  616115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:32:01.530177  616115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:32:01.548874  616115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:32:01.643768  616115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:32:01.656744  616115 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:32:01.656774  616115 api_server.go:166] Checking apiserver status ...
	I0917 00:32:01.656807  616115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:32:01.668799  616115 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/346/cgroup
	W0917 00:32:01.679747  616115 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/346/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:32:01.679809  616115 ssh_runner.go:195] Run: ls
	I0917 00:32:01.683830  616115 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:32:01.688473  616115 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:32:01.688500  616115 status.go:463] ha-671025-m02 apiserver status = Running (err=<nil>)
	I0917 00:32:01.688510  616115 status.go:176] ha-671025-m02 status: &{Name:ha-671025-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:32:01.688525  616115 status.go:174] checking status of ha-671025-m03 ...
	I0917 00:32:01.688768  616115 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:32:01.707212  616115 status.go:371] ha-671025-m03 host status = "Running" (err=<nil>)
	I0917 00:32:01.707237  616115 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:32:01.707533  616115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:32:01.726715  616115 host.go:66] Checking if "ha-671025-m03" exists ...
	I0917 00:32:01.727077  616115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:32:01.727130  616115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:32:01.746078  616115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:32:01.839947  616115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:32:01.853119  616115 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:32:01.853149  616115 api_server.go:166] Checking apiserver status ...
	I0917 00:32:01.853183  616115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:32:01.864707  616115 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup
	W0917 00:32:01.874886  616115 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:32:01.874939  616115 ssh_runner.go:195] Run: ls
	I0917 00:32:01.878674  616115 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:32:01.883049  616115 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:32:01.883081  616115 status.go:463] ha-671025-m03 apiserver status = Running (err=<nil>)
	I0917 00:32:01.883093  616115 status.go:176] ha-671025-m03 status: &{Name:ha-671025-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:32:01.883114  616115 status.go:174] checking status of ha-671025-m04 ...
	I0917 00:32:01.883493  616115 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:32:01.902842  616115 status.go:371] ha-671025-m04 host status = "Stopped" (err=<nil>)
	I0917 00:32:01.902866  616115 status.go:384] host is not running, skipping remaining checks
	I0917 00:32:01.902876  616115 status.go:176] ha-671025-m04 status: &{Name:ha-671025-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5" : exit status 7
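Exit status 7 is what minikube status returns when at least one host in the profile is not running, which matches the "host: Stopped" rows for ha-671025-m04 in every attempt above; no amount of retrying can succeed until that node is restarted. A small sketch of detecting the stopped node programmatically from `minikube status -o json` (the struct fields follow the status.go lines in the log; the exact JSON schema is an assumption):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// nodeStatus mirrors the fields printed by status.go in the log; the JSON
// shape of `minikube status -o json` for a multi-node profile is assumed
// to be a list of such objects.
type nodeStatus struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// minikube exits non-zero (7 here) when a host is stopped, so keep the
	// stdout payload and inspect it rather than bailing on the exit error.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "ha-671025",
		"status", "-o", "json").Output()
	var nodes []nodeStatus
	if err := json.Unmarshal(out, &nodes); err != nil {
		fmt.Println("unexpected status payload:", err)
		return
	}
	for _, n := range nodes {
		if n.Host != "Running" {
			fmt.Printf("%s is %s\n", n.Name, n.Host) // e.g. "ha-671025-m04 is Stopped"
		}
	}
}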
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-671025
helpers_test.go:243: (dbg) docker inspect ha-671025:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea",
	        "Created": "2025-09-17T00:28:07.60079298Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 591894,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:28:07.642349633Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/hostname",
	        "HostsPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/hosts",
	        "LogPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea-json.log",
	        "Name": "/ha-671025",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-671025:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-671025",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea",
	                "LowerDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-671025",
	                "Source": "/var/lib/docker/volumes/ha-671025/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-671025",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-671025",
	                "name.minikube.sigs.k8s.io": "ha-671025",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2947b2c900e461fedf4c1b14afccf677c0bbbd5856a737563908fb819f368e69",
	            "SandboxKey": "/var/run/docker/netns/2947b2c900e4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-671025": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:4e:63:a1:43:0d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0c35d0ccc41812bde7181e33c481a92e6c52d2d90efef6c84bca54a78763ef8",
	                    "EndpointID": "e04f7d855de79c251547e2cb959967e0ee3cd816f6030c7dc40e9731e31f953c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-671025",
	                        "843490787feb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
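
The inspect output above is the post-mortem docker inspect of the kicbase container: each exposed container port (22, 2376, 5000, 8443, 32443) is published on an ephemeral 127.0.0.1 host port, so HostConfig.PortBindings carries an empty "HostPort": "" while the resolved bindings are recorded under NetworkSettings.Ports (for example 22/tcp -> 127.0.0.1:33148, the address the harness then dials for SSH). As a minimal illustrative sketch only (not minikube's own code; the container name is simply taken from the output above), the resolved binding can be read back in Go like this:

	// inspectport.go - decode `docker inspect ha-671025` and print the host
	// address bound to the container's SSH port (22/tcp), mirroring the
	// NetworkSettings.Ports block shown above. Assumes docker is on PATH.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type container struct {
		NetworkSettings struct {
			// encoding/json matches the "HostIp"/"HostPort" keys case-insensitively.
			Ports map[string][]struct{ HostIp, HostPort string }
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "ha-671025").Output()
		if err != nil {
			log.Fatal(err)
		}
		var cs []container // docker inspect returns a JSON array of containers
		if err := json.Unmarshal(out, &cs); err != nil {
			log.Fatal(err)
		}
		for _, b := range cs[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("%s:%s\n", b.HostIp, b.HostPort) // e.g. 127.0.0.1:33148
		}
	}

This is the same resolution step the start log below performs with docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'".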
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-671025 -n ha-671025
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-671025 logs -n 25: (1.239089255s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ cp      │ ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt ha-671025:/home/docker/cp-test_ha-671025-m03_ha-671025.txt                      │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025 sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025.txt                                                │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ cp      │ ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt ha-671025-m02:/home/docker/cp-test_ha-671025-m03_ha-671025-m02.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m02 sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025-m02.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ cp      │ ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt ha-671025-m04:/home/docker/cp-test_ha-671025-m03_ha-671025-m04.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025-m04.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp testdata/cp-test.txt ha-671025-m04:/home/docker/cp-test.txt                                                            │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile688907033/001/cp-test_ha-671025-m04.txt │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025:/home/docker/cp-test_ha-671025-m04_ha-671025.txt                      │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025.txt                                                │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025-m02:/home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m02 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025-m03:/home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ node    │ ha-671025 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:31 UTC │
	│ node    │ ha-671025 node start m02 --alsologtostderr -v 5                                                                                     │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:31 UTC │ 17 Sep 25 00:31 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
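
The Last Start section below reproduces minikube's own start log; per its header, every line follows the klog format [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. A minimal sketch (illustrative only, not part of the test harness) of splitting such a line into its fields in Go:

	// kloglines.go - parse one klog-formatted line, following the
	// "Log line format" stated in the Last Start header below.
	package main

	import (
		"fmt"
		"regexp"
	)

	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

	func main() {
		line := "I0917 00:28:02.421105  591333 out.go:360] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			// m[1]=severity m[2]=mmdd m[3]=time m[4]=thread id m[5]=file:line m[6]=message
			fmt.Printf("severity=%s date=%s time=%s tid=%s src=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}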
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:28:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:28:02.421105  591333 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:28:02.421342  591333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:28:02.421350  591333 out.go:374] Setting ErrFile to fd 2...
	I0917 00:28:02.421355  591333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:28:02.421569  591333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:28:02.422069  591333 out.go:368] Setting JSON to false
	I0917 00:28:02.422989  591333 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11425,"bootTime":1758057457,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:28:02.423098  591333 start.go:140] virtualization: kvm guest
	I0917 00:28:02.425200  591333 out.go:179] * [ha-671025] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:28:02.426666  591333 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:28:02.426650  591333 notify.go:220] Checking for updates...
	I0917 00:28:02.429221  591333 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:28:02.430609  591333 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:28:02.431832  591333 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 00:28:02.433241  591333 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:28:02.434707  591333 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:28:02.436048  591333 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:28:02.460585  591333 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:28:02.460765  591333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:28:02.517630  591333 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:28:02.506821705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:28:02.517750  591333 docker.go:318] overlay module found
	I0917 00:28:02.519568  591333 out.go:179] * Using the docker driver based on user configuration
	I0917 00:28:02.520915  591333 start.go:304] selected driver: docker
	I0917 00:28:02.520935  591333 start.go:918] validating driver "docker" against <nil>
	I0917 00:28:02.520951  591333 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:28:02.521682  591333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:28:02.578543  591333 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:28:02.56897484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:28:02.578724  591333 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0917 00:28:02.578937  591333 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:28:02.580907  591333 out.go:179] * Using Docker driver with root privileges
	I0917 00:28:02.582377  591333 cni.go:84] Creating CNI manager for ""
	I0917 00:28:02.582477  591333 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0917 00:28:02.582493  591333 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 00:28:02.582574  591333 start.go:348] cluster config:
	{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:28:02.583947  591333 out.go:179] * Starting "ha-671025" primary control-plane node in "ha-671025" cluster
	I0917 00:28:02.585129  591333 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:28:02.586454  591333 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:28:02.587786  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:28:02.587830  591333 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 00:28:02.587838  591333 cache.go:58] Caching tarball of preloaded images
	I0917 00:28:02.587843  591333 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:28:02.587944  591333 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:28:02.587958  591333 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:28:02.588350  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:28:02.588379  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json: {Name:mk091aa75e831ff22299b49a9817446c9f212399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:02.609265  591333 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:28:02.609287  591333 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:28:02.609305  591333 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:28:02.609329  591333 start.go:360] acquireMachinesLock for ha-671025: {Name:mk59b9e849284ed1f29625993b42430f4f0355ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:28:02.609454  591333 start.go:364] duration metric: took 102.584µs to acquireMachinesLock for "ha-671025"
	I0917 00:28:02.609482  591333 start.go:93] Provisioning new machine with config: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:28:02.609540  591333 start.go:125] createHost starting for "" (driver="docker")
	I0917 00:28:02.611610  591333 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:28:02.611847  591333 start.go:159] libmachine.API.Create for "ha-671025" (driver="docker")
	I0917 00:28:02.611880  591333 client.go:168] LocalClient.Create starting
	I0917 00:28:02.611969  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 00:28:02.612007  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:28:02.612019  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:28:02.612089  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 00:28:02.612110  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:28:02.612122  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:28:02.612504  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 00:28:02.630138  591333 cli_runner.go:211] docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 00:28:02.630214  591333 network_create.go:284] running [docker network inspect ha-671025] to gather additional debugging logs...
	I0917 00:28:02.630235  591333 cli_runner.go:164] Run: docker network inspect ha-671025
	W0917 00:28:02.647610  591333 cli_runner.go:211] docker network inspect ha-671025 returned with exit code 1
	I0917 00:28:02.647648  591333 network_create.go:287] error running [docker network inspect ha-671025]: docker network inspect ha-671025: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-671025 not found
	I0917 00:28:02.647665  591333 network_create.go:289] output of [docker network inspect ha-671025]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-671025 not found
	
	** /stderr **
	I0917 00:28:02.647783  591333 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:28:02.666874  591333 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014926f0}
	I0917 00:28:02.666937  591333 network_create.go:124] attempt to create docker network ha-671025 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0917 00:28:02.666993  591333 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-671025 ha-671025
	I0917 00:28:02.726570  591333 network_create.go:108] docker network ha-671025 192.168.49.0/24 created
	I0917 00:28:02.726603  591333 kic.go:121] calculated static IP "192.168.49.2" for the "ha-671025" container
	I0917 00:28:02.726684  591333 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:28:02.744335  591333 cli_runner.go:164] Run: docker volume create ha-671025 --label name.minikube.sigs.k8s.io=ha-671025 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:28:02.765618  591333 oci.go:103] Successfully created a docker volume ha-671025
	I0917 00:28:02.765710  591333 cli_runner.go:164] Run: docker run --rm --name ha-671025-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025 --entrypoint /usr/bin/test -v ha-671025:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:28:03.152134  591333 oci.go:107] Successfully prepared a docker volume ha-671025
	I0917 00:28:03.152201  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:28:03.152229  591333 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:28:03.152307  591333 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:28:07.519336  591333 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.366963199s)
	I0917 00:28:07.519373  591333 kic.go:203] duration metric: took 4.3671415s to extract preloaded images to volume ...
	W0917 00:28:07.519497  591333 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:28:07.519557  591333 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:28:07.519606  591333 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:28:07.583258  591333 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-671025 --name ha-671025 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-671025 --network ha-671025 --ip 192.168.49.2 --volume ha-671025:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 00:28:07.861983  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Running}}
	I0917 00:28:07.881740  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:07.902486  591333 cli_runner.go:164] Run: docker exec ha-671025 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:28:07.957445  591333 oci.go:144] the created container "ha-671025" has a running status.
	I0917 00:28:07.957491  591333 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa...
	I0917 00:28:07.970221  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0917 00:28:07.970277  591333 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:28:07.996810  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:08.018618  591333 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 00:28:08.018648  591333 kic_runner.go:114] Args: [docker exec --privileged ha-671025 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 00:28:08.065859  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:08.088307  591333 machine.go:93] provisionDockerMachine start ...
	I0917 00:28:08.088464  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:08.112791  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:08.113142  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0917 00:28:08.113159  591333 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:28:08.114236  591333 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41092->127.0.0.1:33148: read: connection reset by peer
	I0917 00:28:11.250841  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025
	
	I0917 00:28:11.250869  591333 ubuntu.go:182] provisioning hostname "ha-671025"
	I0917 00:28:11.250946  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:11.270326  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:11.270573  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0917 00:28:11.270589  591333 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025 && echo "ha-671025" | sudo tee /etc/hostname
	I0917 00:28:11.422194  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025
	
	I0917 00:28:11.422282  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:11.441086  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:11.441373  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0917 00:28:11.441412  591333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:28:11.579534  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:28:11.579570  591333 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:28:11.579606  591333 ubuntu.go:190] setting up certificates
	I0917 00:28:11.579621  591333 provision.go:84] configureAuth start
	I0917 00:28:11.579696  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:28:11.598338  591333 provision.go:143] copyHostCerts
	I0917 00:28:11.598381  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:28:11.598438  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:28:11.598450  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:28:11.598528  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:28:11.598637  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:28:11.598660  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:28:11.598668  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:28:11.598709  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:28:11.598793  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:28:11.598818  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:28:11.598827  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:28:11.598863  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:28:11.598936  591333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025 san=[127.0.0.1 192.168.49.2 ha-671025 localhost minikube]
	I0917 00:28:11.692056  591333 provision.go:177] copyRemoteCerts
	I0917 00:28:11.692126  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:28:11.692177  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:11.710836  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:11.809661  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:28:11.809738  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:28:11.838472  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:28:11.838547  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0917 00:28:11.864972  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:28:11.865064  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 00:28:11.892502  591333 provision.go:87] duration metric: took 312.863604ms to configureAuth
	I0917 00:28:11.892539  591333 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:28:11.892749  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:28:11.892876  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:11.911894  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:11.912108  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0917 00:28:11.912123  591333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:28:12.156893  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:28:12.156918  591333 machine.go:96] duration metric: took 4.068577091s to provisionDockerMachine
	I0917 00:28:12.156929  591333 client.go:171] duration metric: took 9.545042483s to LocalClient.Create
	I0917 00:28:12.156950  591333 start.go:167] duration metric: took 9.54510971s to libmachine.API.Create "ha-671025"
	I0917 00:28:12.156957  591333 start.go:293] postStartSetup for "ha-671025" (driver="docker")
	I0917 00:28:12.156965  591333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:28:12.157043  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:28:12.157079  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:12.175648  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:12.275414  591333 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:28:12.279194  591333 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:28:12.279224  591333 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:28:12.279231  591333 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:28:12.279238  591333 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:28:12.279255  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:28:12.279317  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:28:12.279416  591333 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:28:12.279430  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:28:12.279530  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:28:12.288873  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:28:12.317418  591333 start.go:296] duration metric: took 160.444141ms for postStartSetup
	I0917 00:28:12.317811  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:28:12.336261  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:28:12.336565  591333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:28:12.336607  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:12.354705  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:12.446983  591333 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:28:12.451593  591333 start.go:128] duration metric: took 9.842036225s to createHost
	I0917 00:28:12.451634  591333 start.go:83] releasing machines lock for "ha-671025", held for 9.842165682s
	I0917 00:28:12.451714  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:28:12.469798  591333 ssh_runner.go:195] Run: cat /version.json
	I0917 00:28:12.469852  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:12.469869  591333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:28:12.469931  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:12.489508  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:12.489501  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:12.581676  591333 ssh_runner.go:195] Run: systemctl --version
	I0917 00:28:12.654927  591333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:28:12.796661  591333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:28:12.802016  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:28:12.827191  591333 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:28:12.827278  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:28:12.858197  591333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 00:28:12.858222  591333 start.go:495] detecting cgroup driver to use...
	I0917 00:28:12.858256  591333 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:28:12.858306  591333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:28:12.874462  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:28:12.887158  591333 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:28:12.887226  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:28:12.902417  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:28:12.917174  591333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:28:12.986628  591333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:28:13.060583  591333 docker.go:234] disabling docker service ...
	I0917 00:28:13.060656  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:28:13.081466  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:28:13.094012  591333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:28:13.164943  591333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:28:13.315404  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:28:13.328708  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:28:13.347694  591333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:28:13.347757  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.361221  591333 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:28:13.361294  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.371972  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.382985  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.394505  591333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:28:13.405096  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.416205  591333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.434282  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:13.445654  591333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:28:13.454948  591333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:28:13.464245  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:28:13.526087  591333 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 00:28:13.629597  591333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:28:13.629677  591333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:28:13.634535  591333 start.go:563] Will wait 60s for crictl version
	I0917 00:28:13.634599  591333 ssh_runner.go:195] Run: which crictl
	I0917 00:28:13.639122  591333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:28:13.675949  591333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:28:13.676043  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:28:13.713216  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:28:13.752386  591333 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:28:13.753755  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:28:13.771156  591333 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:28:13.775524  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:28:13.788890  591333 kubeadm.go:875] updating cluster {Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:28:13.789115  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:28:13.789184  591333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:28:13.863780  591333 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:28:13.863811  591333 crio.go:433] Images already preloaded, skipping extraction
	I0917 00:28:13.863873  591333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:28:13.900999  591333 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:28:13.901021  591333 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:28:13.901028  591333 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0917 00:28:13.901149  591333 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:28:13.901218  591333 ssh_runner.go:195] Run: crio config
	I0917 00:28:13.947330  591333 cni.go:84] Creating CNI manager for ""
	I0917 00:28:13.947354  591333 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0917 00:28:13.947367  591333 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:28:13.947398  591333 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-671025 NodeName:ha-671025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:28:13.947540  591333 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-671025"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
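The config dump above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. It can be sanity-checked offline before init; a minimal sketch, assuming this kubeadm build ships the `kubeadm config validate` subcommand (present in recent releases, not invoked in this run):

	sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml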
	I0917 00:28:13.947571  591333 kube-vip.go:115] generating kube-vip config ...
	I0917 00:28:13.947618  591333 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:28:13.962176  591333 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:28:13.962288  591333 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
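With the ip_vs modules unavailable (see the lsmod probe above), this manifest runs kube-vip in ARP mode only: leader election plus an ARP-advertised VIP, no IPVS load-balancing. Once a leader holds the plndr-cp-lock lease, the VIP should be visible on that node's eth0; a hypothetical spot check from the host:

	docker exec ha-671025 ip addr show eth0 | grep 192.168.49.254
	curl -k https://192.168.49.254:8443/livez   # /livez is anonymously readable by default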
	I0917 00:28:13.962356  591333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:28:13.972318  591333 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:28:13.972409  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:28:13.982775  591333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0917 00:28:14.003185  591333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:28:14.025114  591333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I0917 00:28:14.043893  591333 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0917 00:28:14.063914  591333 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:28:14.067851  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:28:14.079495  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:28:14.146352  591333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:28:14.170001  591333 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.2
	I0917 00:28:14.170029  591333 certs.go:194] generating shared ca certs ...
	I0917 00:28:14.170049  591333 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.170209  591333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:28:14.170248  591333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:28:14.170258  591333 certs.go:256] generating profile certs ...
	I0917 00:28:14.170312  591333 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:28:14.170334  591333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt with IP's: []
	I0917 00:28:14.258881  591333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt ...
	I0917 00:28:14.258912  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt: {Name:mkf356a325e81df463620a9a59f1e19636a8bbe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.259129  591333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key ...
	I0917 00:28:14.259150  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key: {Name:mka2338ec2b6b28954ea0ef14eeb3d06111be43d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.259268  591333 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.42f16444
	I0917 00:28:14.259285  591333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.42f16444 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0917 00:28:14.420479  591333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.42f16444 ...
	I0917 00:28:14.420509  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.42f16444: {Name:mkcf98c32344d33f146459467ae0b529b09930e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.420720  591333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.42f16444 ...
	I0917 00:28:14.420744  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.42f16444: {Name:mk2a9dddb825d571b4beb46eeddb7582f0b5a38a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.420868  591333 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.42f16444 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt
	I0917 00:28:14.420963  591333 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.42f16444 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key
	I0917 00:28:14.421066  591333 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:28:14.421086  591333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt with IP's: []
	I0917 00:28:14.667928  591333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt ...
	I0917 00:28:14.667965  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt: {Name:mk8fc3d9cf0ef31fe8163e3202ec93ff4212c0d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.668186  591333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key ...
	I0917 00:28:14.668205  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key: {Name:mk4aadb37423b11008cecd193572dcb26f4156f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:14.668320  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:28:14.668341  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:28:14.668351  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:28:14.668364  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:28:14.668375  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:28:14.668386  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:28:14.668408  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:28:14.668420  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:28:14.668487  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:28:14.668524  591333 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:28:14.668533  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:28:14.668554  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:28:14.668631  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:28:14.668666  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:28:14.668710  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:28:14.668747  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:28:14.668764  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:14.668780  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:28:14.669300  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:28:14.695942  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:28:14.721853  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:28:14.746954  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:28:14.773182  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 00:28:14.798782  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:28:14.823720  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:28:14.847907  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:28:14.872531  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:28:14.900554  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:28:14.925365  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:28:14.953903  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:28:14.973565  591333 ssh_runner.go:195] Run: openssl version
	I0917 00:28:14.979257  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:28:14.989070  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:28:14.992786  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:28:14.992847  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:28:14.999827  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:28:15.009762  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:28:15.019180  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:28:15.022635  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:28:15.022690  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:28:15.029591  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:28:15.039107  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:28:15.048628  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:15.052181  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:15.052230  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:15.058893  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
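The ln -fs targets above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's c_rehash convention: each link is named after the certificate's subject hash plus a .0 suffix, which is why each link is preceded by an `openssl x509 -hash` run. The derivation, as a sketch:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"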
	I0917 00:28:15.069771  591333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:28:15.073670  591333 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 00:28:15.073738  591333 kubeadm.go:392] StartCluster: {Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:28:15.073818  591333 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 00:28:15.073904  591333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:28:15.110504  591333 cri.go:89] found id: ""
	I0917 00:28:15.110589  591333 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:28:15.119903  591333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 00:28:15.129328  591333 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 00:28:15.129384  591333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 00:28:15.138492  591333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 00:28:15.138510  591333 kubeadm.go:157] found existing configuration files:
	
	I0917 00:28:15.138563  591333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 00:28:15.147903  591333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 00:28:15.147969  591333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 00:28:15.157062  591333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 00:28:15.166583  591333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 00:28:15.166646  591333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 00:28:15.176378  591333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 00:28:15.185922  591333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 00:28:15.185988  591333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 00:28:15.195234  591333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 00:28:15.204565  591333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 00:28:15.204624  591333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
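The four grep/rm pairs above are a stale-config sweep: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is deleted so kubeadm init can regenerate it. The same logic as a loop (an equivalent sketch, not the literal minikube code):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done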
	I0917 00:28:15.213513  591333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 00:28:15.268809  591333 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0917 00:28:15.322273  591333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 00:28:25.344526  591333 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0917 00:28:25.344586  591333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 00:28:25.344654  591333 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0917 00:28:25.344699  591333 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0917 00:28:25.344758  591333 kubeadm.go:310] OS: Linux
	I0917 00:28:25.344813  591333 kubeadm.go:310] CGROUPS_CPU: enabled
	I0917 00:28:25.344864  591333 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0917 00:28:25.344910  591333 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0917 00:28:25.344953  591333 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0917 00:28:25.345000  591333 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0917 00:28:25.345048  591333 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0917 00:28:25.345119  591333 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0917 00:28:25.345192  591333 kubeadm.go:310] CGROUPS_IO: enabled
	I0917 00:28:25.345263  591333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 00:28:25.345346  591333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 00:28:25.345452  591333 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 00:28:25.345508  591333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 00:28:25.347069  591333 out.go:252]   - Generating certificates and keys ...
	I0917 00:28:25.347143  591333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 00:28:25.347233  591333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 00:28:25.347311  591333 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 00:28:25.347369  591333 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 00:28:25.347468  591333 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 00:28:25.347518  591333 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 00:28:25.347562  591333 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 00:28:25.347663  591333 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-671025 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 00:28:25.347707  591333 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 00:28:25.347846  591333 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-671025 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 00:28:25.348037  591333 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 00:28:25.348142  591333 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 00:28:25.348209  591333 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 00:28:25.348278  591333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 00:28:25.348323  591333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 00:28:25.348380  591333 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 00:28:25.348445  591333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 00:28:25.348531  591333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 00:28:25.348623  591333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 00:28:25.348735  591333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 00:28:25.348831  591333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 00:28:25.351075  591333 out.go:252]   - Booting up control plane ...
	I0917 00:28:25.351182  591333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 00:28:25.351283  591333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 00:28:25.351361  591333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 00:28:25.351548  591333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 00:28:25.351700  591333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0917 00:28:25.351849  591333 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0917 00:28:25.351934  591333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 00:28:25.351970  591333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 00:28:25.352082  591333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 00:28:25.352189  591333 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 00:28:25.352283  591333 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00103693s
	I0917 00:28:25.352386  591333 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0917 00:28:25.352498  591333 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0917 00:28:25.352576  591333 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0917 00:28:25.352659  591333 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0917 00:28:25.352745  591333 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.008701955s
	I0917 00:28:25.352807  591333 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.208053254s
	I0917 00:28:25.352891  591333 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 3.501882009s
	I0917 00:28:25.352984  591333 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 00:28:25.353099  591333 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 00:28:25.353159  591333 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 00:28:25.353326  591333 kubeadm.go:310] [mark-control-plane] Marking the node ha-671025 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 00:28:25.353381  591333 kubeadm.go:310] [bootstrap-token] Using token: 945t58.lx3tewj0v31y7u2l
	I0917 00:28:25.354623  591333 out.go:252]   - Configuring RBAC rules ...
	I0917 00:28:25.354715  591333 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 00:28:25.354845  591333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 00:28:25.355014  591333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 00:28:25.355187  591333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 00:28:25.355345  591333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 00:28:25.355454  591333 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 00:28:25.355574  591333 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 00:28:25.355621  591333 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 00:28:25.355662  591333 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 00:28:25.355668  591333 kubeadm.go:310] 
	I0917 00:28:25.355718  591333 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 00:28:25.355727  591333 kubeadm.go:310] 
	I0917 00:28:25.355804  591333 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 00:28:25.355810  591333 kubeadm.go:310] 
	I0917 00:28:25.355831  591333 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 00:28:25.355911  591333 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 00:28:25.355972  591333 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 00:28:25.355979  591333 kubeadm.go:310] 
	I0917 00:28:25.356051  591333 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 00:28:25.356065  591333 kubeadm.go:310] 
	I0917 00:28:25.356135  591333 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 00:28:25.356143  591333 kubeadm.go:310] 
	I0917 00:28:25.356220  591333 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 00:28:25.356331  591333 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 00:28:25.356455  591333 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 00:28:25.356470  591333 kubeadm.go:310] 
	I0917 00:28:25.356549  591333 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 00:28:25.356635  591333 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 00:28:25.356643  591333 kubeadm.go:310] 
	I0917 00:28:25.356717  591333 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 945t58.lx3tewj0v31y7u2l \
	I0917 00:28:25.356829  591333 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 \
	I0917 00:28:25.356858  591333 kubeadm.go:310] 	--control-plane 
	I0917 00:28:25.356865  591333 kubeadm.go:310] 
	I0917 00:28:25.356941  591333 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 00:28:25.356947  591333 kubeadm.go:310] 
	I0917 00:28:25.357048  591333 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 945t58.lx3tewj0v31y7u2l \
	I0917 00:28:25.357188  591333 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 
	I0917 00:28:25.357207  591333 cni.go:84] Creating CNI manager for ""
	I0917 00:28:25.357216  591333 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0917 00:28:25.358901  591333 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0917 00:28:25.360097  591333 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0917 00:28:25.364931  591333 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0917 00:28:25.364953  591333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0917 00:28:25.387094  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0917 00:28:25.613643  591333 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 00:28:25.613728  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:25.613746  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-671025 minikube.k8s.io/updated_at=2025_09_17T00_28_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-671025 minikube.k8s.io/primary=true
	I0917 00:28:25.624073  591333 ops.go:34] apiserver oom_adj: -16
	I0917 00:28:25.696361  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:26.196672  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:26.696850  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:27.197218  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:27.696539  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:28.196491  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:28.696543  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:29.196814  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:29.696595  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:30.196581  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:28:30.273337  591333 kubeadm.go:1105] duration metric: took 4.659672583s to wait for elevateKubeSystemPrivileges
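The run of identical `kubectl get sa default` calls above is a ~500ms poll: elevateKubeSystemPrivileges waits for the default ServiceAccount to exist, the signal that the minikube-rbac clusterrolebinding created earlier can take effect. An equivalent shell sketch:

	until sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done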
	I0917 00:28:30.273483  591333 kubeadm.go:394] duration metric: took 15.19974193s to StartCluster
	I0917 00:28:30.273523  591333 settings.go:142] acquiring lock: {Name:mk3b4e5824fb8718eece00dc70a9d05f0af2a028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:30.273607  591333 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:28:30.274607  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:30.274913  591333 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:28:30.274945  591333 start.go:241] waiting for startup goroutines ...
	I0917 00:28:30.274948  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 00:28:30.274965  591333 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:28:30.275045  591333 addons.go:69] Setting storage-provisioner=true in profile "ha-671025"
	I0917 00:28:30.275085  591333 addons.go:238] Setting addon storage-provisioner=true in "ha-671025"
	I0917 00:28:30.275129  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:28:30.275048  591333 addons.go:69] Setting default-storageclass=true in profile "ha-671025"
	I0917 00:28:30.275164  591333 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-671025"
	I0917 00:28:30.275205  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:28:30.275523  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:30.275665  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:30.298018  591333 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:28:30.298668  591333 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:28:30.298695  591333 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:28:30.298702  591333 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:28:30.298708  591333 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:28:30.298714  591333 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:28:30.298802  591333 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:28:30.299193  591333 addons.go:238] Setting addon default-storageclass=true in "ha-671025"
	I0917 00:28:30.299247  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:28:30.299354  591333 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 00:28:30.299784  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:30.300585  591333 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:28:30.300605  591333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 00:28:30.300669  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:30.319752  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:30.321070  591333 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 00:28:30.321101  591333 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 00:28:30.321165  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:30.347717  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
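Both SSH clients above connect to 127.0.0.1:33148, the host port Docker mapped to the container's 22/tcp; the inspect template two lines earlier is what resolves it. Reproduced by hand with this run's values (a sketch):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-671025   # 33148 here
	ssh -p 33148 -i /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa docker@127.0.0.1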
	I0917 00:28:30.362789  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 00:28:30.443108  591333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:28:30.467358  591333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 00:28:30.541692  591333 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
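The replace pipeline above edits the coredns ConfigMap in flight. Reconstructed from its two sed expressions, the Corefile gains a `log` directive before `errors` and this hosts stanza ahead of the forward plugin, so in-cluster lookups of host.minikube.internal resolve to the Docker gateway:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}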
	I0917 00:28:30.788755  591333 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0917 00:28:30.790283  591333 addons.go:514] duration metric: took 515.302961ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0917 00:28:30.790337  591333 start.go:246] waiting for cluster config update ...
	I0917 00:28:30.790355  591333 start.go:255] writing updated cluster config ...
	I0917 00:28:30.792167  591333 out.go:203] 
	I0917 00:28:30.794434  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:28:30.794553  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:28:30.797029  591333 out.go:179] * Starting "ha-671025-m02" control-plane node in "ha-671025" cluster
	I0917 00:28:30.798740  591333 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:28:30.800340  591333 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:28:30.801532  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:28:30.801576  591333 cache.go:58] Caching tarball of preloaded images
	I0917 00:28:30.801656  591333 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:28:30.801701  591333 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:28:30.801721  591333 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:28:30.801837  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:28:30.826923  591333 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:28:30.826950  591333 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:28:30.826970  591333 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:28:30.827006  591333 start.go:360] acquireMachinesLock for ha-671025-m02: {Name:mk1465985964f60af81adbf10dbe0a21c7eb20d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:28:30.827168  591333 start.go:364] duration metric: took 135.604µs to acquireMachinesLock for "ha-671025-m02"
	I0917 00:28:30.827198  591333 start.go:93] Provisioning new machine with config: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:28:30.827285  591333 start.go:125] createHost starting for "m02" (driver="docker")
	I0917 00:28:30.829869  591333 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:28:30.830019  591333 start.go:159] libmachine.API.Create for "ha-671025" (driver="docker")
	I0917 00:28:30.830056  591333 client.go:168] LocalClient.Create starting
	I0917 00:28:30.830117  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 00:28:30.830162  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:28:30.830180  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:28:30.830241  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 00:28:30.830266  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:28:30.830274  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:28:30.830527  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:28:30.850687  591333 network_create.go:77] Found existing network {name:ha-671025 subnet:0xc0018d10b0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0917 00:28:30.850727  591333 kic.go:121] calculated static IP "192.168.49.3" for the "ha-671025-m02" container
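
The static IP above is not assigned by Docker: minikube computes it deterministically, effectively the network gateway plus the node's ordinal (gateway 192.168.49.1, primary node .2, m02 .3). A minimal sketch of that arithmetic, assuming a /24 network and a hypothetical nodeIP helper (not minikube's actual function):

    package main

    import (
    	"fmt"
    	"net"
    )

    // nodeIP is a hypothetical helper: the gateway address plus the node's
    // ordinal offset within the subnet. Offset 1 -> .2 (primary), 2 -> .3 (m02).
    func nodeIP(gateway string, offset byte) net.IP {
    	ip := net.ParseIP(gateway).To4()
    	out := make(net.IP, len(ip))
    	copy(out, ip)
    	out[3] += offset // no overflow handling; fine for a /24 sketch
    	return out
    }

    func main() {
    	fmt.Println(nodeIP("192.168.49.1", 2)) // 192.168.49.3, matching the log
    }
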
	I0917 00:28:30.850801  591333 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:28:30.869737  591333 cli_runner.go:164] Run: docker volume create ha-671025-m02 --label name.minikube.sigs.k8s.io=ha-671025-m02 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:28:30.890468  591333 oci.go:103] Successfully created a docker volume ha-671025-m02
	I0917 00:28:30.890596  591333 cli_runner.go:164] Run: docker run --rm --name ha-671025-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025-m02 --entrypoint /usr/bin/test -v ha-671025-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:28:31.278702  591333 oci.go:107] Successfully prepared a docker volume ha-671025-m02
	I0917 00:28:31.278750  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:28:31.278777  591333 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:28:31.278882  591333 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:28:35.682273  591333 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.403350864s)
	I0917 00:28:35.682311  591333 kic.go:203] duration metric: took 4.403531688s to extract preloaded images to volume ...
	W0917 00:28:35.682411  591333 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:28:35.682448  591333 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:28:35.682488  591333 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:28:35.742164  591333 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-671025-m02 --name ha-671025-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-671025-m02 --network ha-671025 --ip 192.168.49.3 --volume ha-671025-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
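
Note that the run command publishes 22, 2376, 5000, 8443 and 32443 to ephemeral host ports bound to 127.0.0.1 (the --publish=127.0.0.1::22 form leaves the host port blank), so the inspect calls that follow have to recover the assigned ports. A sketch of that lookup, shelling out to the docker CLI with the same Go template the log shows:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same template string the log uses to resolve the host port for 22/tcp.
    	out, err := exec.Command("docker", "container", "inspect",
    		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		"ha-671025-m02").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ssh -> 127.0.0.1:" + strings.TrimSpace(string(out))) // 33153 in this run
    }
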
	I0917 00:28:36.033045  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Running}}
	I0917 00:28:36.053351  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:28:36.072949  591333 cli_runner.go:164] Run: docker exec ha-671025-m02 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:28:36.126815  591333 oci.go:144] the created container "ha-671025-m02" has a running status.
	I0917 00:28:36.126844  591333 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa...
	I0917 00:28:36.161749  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0917 00:28:36.161792  591333 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:28:36.189714  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:28:36.212082  591333 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 00:28:36.212109  591333 kic_runner.go:114] Args: [docker exec --privileged ha-671025-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 00:28:36.260306  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:28:36.282829  591333 machine.go:93] provisionDockerMachine start ...
	I0917 00:28:36.282954  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:36.312073  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:36.312435  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I0917 00:28:36.312461  591333 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:28:36.313226  591333 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47290->127.0.0.1:33153: read: connection reset by peer
	I0917 00:28:39.452508  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m02
	
	I0917 00:28:39.452557  591333 ubuntu.go:182] provisioning hostname "ha-671025-m02"
	I0917 00:28:39.452652  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:39.472236  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:39.472561  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I0917 00:28:39.472581  591333 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m02 && echo "ha-671025-m02" | sudo tee /etc/hostname
	I0917 00:28:39.626427  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m02
	
	I0917 00:28:39.626517  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:39.645919  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:39.646146  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I0917 00:28:39.646163  591333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:28:39.786717  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:28:39.786756  591333 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:28:39.786781  591333 ubuntu.go:190] setting up certificates
	I0917 00:28:39.786798  591333 provision.go:84] configureAuth start
	I0917 00:28:39.786974  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:28:39.807773  591333 provision.go:143] copyHostCerts
	I0917 00:28:39.807815  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:28:39.807847  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:28:39.807858  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:28:39.807932  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:28:39.808029  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:28:39.808050  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:28:39.808054  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:28:39.808081  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:28:39.808149  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:28:39.808167  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:28:39.808172  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:28:39.808200  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:28:39.808255  591333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m02 san=[127.0.0.1 192.168.49.3 ha-671025-m02 localhost minikube]
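
configureAuth generates a per-machine server certificate signed by the minikube CA, with the SANs listed above baked in. A self-contained sketch with crypto/x509; it creates a throwaway CA so it runs standalone (the real flow reuses ca.pem/ca-key.pem from the certs directory), and the names and lifetime mirror the log, but the code itself is illustrative, not minikube's provision code:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func check(err error) {
    	if err != nil {
    		panic(err)
    	}
    }

    func main() {
    	// Throwaway CA for the sketch; the real flow loads ca.pem/ca-key.pem.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	check(err)
    	caCert, err := x509.ParseCertificate(caDER)
    	check(err)

    	// Server cert carrying the SANs from the provision.go line above.
    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-671025-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-671025-m02", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	check(err)
    	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }
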
	I0917 00:28:39.918454  591333 provision.go:177] copyRemoteCerts
	I0917 00:28:39.918537  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:28:39.918589  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:39.937978  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:28:40.039160  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:28:40.039233  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:28:40.069797  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:28:40.069887  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:28:40.098311  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:28:40.098408  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 00:28:40.127419  591333 provision.go:87] duration metric: took 340.575644ms to configureAuth
	I0917 00:28:40.127458  591333 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:28:40.127656  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:28:40.127785  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:40.147026  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:28:40.147308  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33153 <nil> <nil>}
	I0917 00:28:40.147331  591333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:28:40.409609  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:28:40.409640  591333 machine.go:96] duration metric: took 4.1267811s to provisionDockerMachine
	I0917 00:28:40.409651  591333 client.go:171] duration metric: took 9.579589798s to LocalClient.Create
	I0917 00:28:40.409674  591333 start.go:167] duration metric: took 9.579655281s to libmachine.API.Create "ha-671025"
	I0917 00:28:40.409684  591333 start.go:293] postStartSetup for "ha-671025-m02" (driver="docker")
	I0917 00:28:40.409696  591333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:28:40.409769  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:28:40.409816  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:40.431881  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:28:40.535836  591333 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:28:40.540091  591333 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:28:40.540127  591333 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:28:40.540134  591333 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:28:40.540141  591333 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:28:40.540153  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:28:40.540203  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:28:40.540294  591333 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:28:40.540310  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:28:40.540600  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:28:40.551220  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:28:40.582236  591333 start.go:296] duration metric: took 172.533526ms for postStartSetup
	I0917 00:28:40.582728  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:28:40.602550  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:28:40.602895  591333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:28:40.602973  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:40.625331  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:28:40.720887  591333 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:28:40.725796  591333 start.go:128] duration metric: took 9.898487722s to createHost
	I0917 00:28:40.725827  591333 start.go:83] releasing machines lock for "ha-671025-m02", held for 9.89864483s
	I0917 00:28:40.725898  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:28:40.749075  591333 out.go:179] * Found network options:
	I0917 00:28:40.750936  591333 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:28:40.752439  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:28:40.752503  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:28:40.752575  591333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:28:40.752624  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:40.752703  591333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:28:40.752776  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:28:40.774163  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:28:40.775400  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:28:41.009369  591333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:28:41.014989  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:28:41.040280  591333 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:28:41.040373  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:28:41.077837  591333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 00:28:41.077864  591333 start.go:495] detecting cgroup driver to use...
	I0917 00:28:41.077899  591333 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:28:41.077939  591333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:28:41.098363  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:28:41.112692  591333 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:28:41.112768  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:28:41.128481  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:28:41.145954  591333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:28:41.216259  591333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:28:41.293618  591333 docker.go:234] disabling docker service ...
	I0917 00:28:41.293683  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:28:41.314463  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:28:41.327805  591333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:28:41.402097  591333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:28:41.515728  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:28:41.528751  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:28:41.548638  591333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:28:41.548717  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.563770  591333 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:28:41.563842  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.575236  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.586559  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.599824  591333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:28:41.612614  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.624744  591333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.645749  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:28:41.659897  591333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:28:41.670457  591333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:28:41.680684  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:28:41.816654  591333 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 00:28:41.923179  591333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:28:41.923241  591333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:28:41.927246  591333 start.go:563] Will wait 60s for crictl version
	I0917 00:28:41.927309  591333 ssh_runner.go:195] Run: which crictl
	I0917 00:28:41.931155  591333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:28:41.970363  591333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:28:41.970470  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:28:42.009043  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:28:42.057831  591333 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:28:42.059352  591333 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:28:42.061008  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:28:42.081413  591333 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:28:42.086716  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
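
The shape of that one-liner matters: it writes the filtered hosts file to /tmp and then cp's it over /etc/hosts rather than mv'ing it, because inside a container /etc/hosts is a bind mount; cp rewrites the existing file in place, while a rename would try to replace the mount point and fail. The same effect in Go, as a sketch (O_TRUNC keeps the existing inode):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const hosts = "/etc/hosts"
    	data, err := os.ReadFile(hosts)
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any stale entry, exactly like the grep -v in the log.
    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, "192.168.49.1\thost.minikube.internal")
    	// O_TRUNC rewrites the existing file, keeping the bind-mounted inode.
    	f, err := os.OpenFile(hosts, os.O_WRONLY|os.O_TRUNC, 0o644)
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	if _, err := fmt.Fprintln(f, strings.Join(kept, "\n")); err != nil {
    		panic(err)
    	}
    }
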
	I0917 00:28:42.100745  591333 mustload.go:65] Loading cluster: ha-671025
	I0917 00:28:42.100976  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:28:42.101278  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:28:42.124810  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:28:42.125292  591333 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.3
	I0917 00:28:42.125333  591333 certs.go:194] generating shared ca certs ...
	I0917 00:28:42.125361  591333 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:42.125545  591333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:28:42.125614  591333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:28:42.125626  591333 certs.go:256] generating profile certs ...
	I0917 00:28:42.125787  591333 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:28:42.125831  591333 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.d800739c
	I0917 00:28:42.125848  591333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.d800739c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0917 00:28:43.131520  591333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.d800739c ...
	I0917 00:28:43.131559  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.d800739c: {Name:mk97bbbbe985039a36a56311ec983801d49afc24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:43.131793  591333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.d800739c ...
	I0917 00:28:43.131814  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.d800739c: {Name:mk2a126624b47a1fbca817c2bf7b065e9ee5a854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:28:43.131938  591333 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.d800739c -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt
	I0917 00:28:43.132097  591333 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.d800739c -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key
	I0917 00:28:43.132233  591333 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:28:43.132252  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:28:43.132265  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:28:43.132275  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:28:43.132286  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:28:43.132296  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:28:43.132308  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:28:43.132318  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:28:43.132330  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:28:43.132385  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:28:43.132425  591333 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:28:43.132435  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:28:43.132458  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:28:43.132480  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:28:43.132500  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:28:43.132536  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:28:43.132561  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:28:43.132576  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:43.132588  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:28:43.132646  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:43.152207  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:43.242834  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:28:43.247724  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:28:43.261684  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:28:43.265651  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 00:28:43.279426  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:28:43.283200  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:28:43.298316  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:28:43.302656  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 00:28:43.316567  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:28:43.320915  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:28:43.334735  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:28:43.339251  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:28:43.354686  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:28:43.382622  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:28:43.411140  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:28:43.439208  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:28:43.468797  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0917 00:28:43.497239  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:28:43.525628  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:28:43.552854  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:28:43.579567  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:28:43.613480  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:28:43.640927  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:28:43.668098  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:28:43.688016  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 00:28:43.709638  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:28:43.729987  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 00:28:43.751570  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:28:43.772873  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:28:43.793231  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:28:43.813996  591333 ssh_runner.go:195] Run: openssl version
	I0917 00:28:43.820372  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:28:43.831827  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:28:43.836450  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:28:43.836601  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:28:43.845799  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:28:43.858335  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:28:43.870361  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:28:43.874499  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:28:43.874557  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:28:43.882167  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:28:43.894006  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:28:43.906727  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:43.910868  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:43.910926  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:28:43.918600  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:28:43.930014  591333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:28:43.933717  591333 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 00:28:43.933786  591333 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 crio true true} ...
	I0917 00:28:43.933892  591333 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:28:43.933920  591333 kube-vip.go:115] generating kube-vip config ...
	I0917 00:28:43.933956  591333 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:28:43.949251  591333 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:28:43.949348  591333 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 00:28:43.949436  591333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:28:43.959785  591333 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:28:43.959858  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:28:43.970815  591333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 00:28:43.992525  591333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:28:44.016479  591333 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:28:44.038080  591333 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:28:44.042531  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:28:44.055802  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:28:44.123804  591333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:28:44.146604  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:28:44.146887  591333 start.go:317] joinCluster: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:28:44.146991  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0917 00:28:44.147052  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:28:44.166636  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:28:44.318607  591333 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:28:44.318654  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9ffj9m.gils691l0zbv1gz9 --discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-671025-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0917 00:29:01.319807  591333 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9ffj9m.gils691l0zbv1gz9 --discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-671025-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (17.001126344s)
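
The --discovery-token-ca-cert-hash passed to kubeadm join above is a standard public-key pin: a SHA-256 digest over the DER-encoded Subject Public Key Info of the cluster CA certificate (RFC 7469 style). A sketch that recomputes it from the ca.crt staged earlier in the log (the path is taken from the scp lines above):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // staged by the scp above
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
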
	I0917 00:29:01.319840  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0917 00:29:01.532514  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-671025-m02 minikube.k8s.io/updated_at=2025_09_17T00_29_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-671025 minikube.k8s.io/primary=false
	I0917 00:29:01.623743  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-671025-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0917 00:29:01.704118  591333 start.go:319] duration metric: took 17.557224287s to joinCluster
	I0917 00:29:01.704207  591333 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:29:01.704539  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:29:01.705687  591333 out.go:179] * Verifying Kubernetes components...
	I0917 00:29:01.707014  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:29:01.810630  591333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:29:01.824161  591333 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:29:01.824231  591333 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:29:01.824550  591333 node_ready.go:35] waiting up to 6m0s for node "ha-671025-m02" to be "Ready" ...
	W0917 00:29:03.828446  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	W0917 00:29:05.829871  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	W0917 00:29:08.329045  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	W0917 00:29:10.828964  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	W0917 00:29:13.328972  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	W0917 00:29:15.828569  591333 node_ready.go:57] node "ha-671025-m02" has "Ready":"False" status (will retry)
	I0917 00:29:16.328859  591333 node_ready.go:49] node "ha-671025-m02" is "Ready"
	I0917 00:29:16.328891  591333 node_ready.go:38] duration metric: took 14.504319776s for node "ha-671025-m02" to be "Ready" ...
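
The wait loop in node_ready.go amounts to polling the Node object until its Ready condition is True, emitting the "will retry" warnings above on each miss. A rough equivalent with client-go; the kubeconfig path and 2s cadence are assumptions, while the 6m budget matches the log:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute) // the 6m0s budget from the log
    	for time.Now().Before(deadline) {
    		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-671025-m02", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range n.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second) // retry cadence is an assumption
    	}
    	panic("timed out waiting for Ready")
    }
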
	I0917 00:29:16.328908  591333 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:29:16.328959  591333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:29:16.341005  591333 api_server.go:72] duration metric: took 14.636761134s to wait for apiserver process to appear ...
	I0917 00:29:16.341029  591333 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:29:16.341048  591333 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:29:16.345248  591333 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:29:16.346148  591333 api_server.go:141] control plane version: v1.34.0
	I0917 00:29:16.346174  591333 api_server.go:131] duration metric: took 5.137742ms to wait for apiserver health ...
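
The healthz probe is a plain HTTPS GET against the API server that expects the literal body "ok". A minimal sketch; it skips TLS verification for brevity, whereas the real check trusts the cluster CA and presents client certificates:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d: %s\n", resp.StatusCode, body) // the log saw 200 and "ok"
    }
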
	I0917 00:29:16.346183  591333 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:29:16.351147  591333 system_pods.go:59] 17 kube-system pods found
	I0917 00:29:16.351175  591333 system_pods.go:61] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running
	I0917 00:29:16.351180  591333 system_pods.go:61] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running
	I0917 00:29:16.351184  591333 system_pods.go:61] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:29:16.351187  591333 system_pods.go:61] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:29:16.351190  591333 system_pods.go:61] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:29:16.351194  591333 system_pods.go:61] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:29:16.351198  591333 system_pods.go:61] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:29:16.351203  591333 system_pods.go:61] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:29:16.351206  591333 system_pods.go:61] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:29:16.351210  591333 system_pods.go:61] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:29:16.351213  591333 system_pods.go:61] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:29:16.351216  591333 system_pods.go:61] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:29:16.351219  591333 system_pods.go:61] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:29:16.351222  591333 system_pods.go:61] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:29:16.351225  591333 system_pods.go:61] "kube-vip-ha-671025" [d18d568e-7183-4cb4-898f-c730aa8b9811] Running
	I0917 00:29:16.351227  591333 system_pods.go:61] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:29:16.351230  591333 system_pods.go:61] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:29:16.351235  591333 system_pods.go:74] duration metric: took 5.047428ms to wait for pod list to return data ...
	I0917 00:29:16.351245  591333 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:29:16.354087  591333 default_sa.go:45] found service account: "default"
	I0917 00:29:16.354107  591333 default_sa.go:55] duration metric: took 2.857135ms for default service account to be created ...
	I0917 00:29:16.354115  591333 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:29:16.357519  591333 system_pods.go:86] 17 kube-system pods found
	I0917 00:29:16.357544  591333 system_pods.go:89] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running
	I0917 00:29:16.357550  591333 system_pods.go:89] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running
	I0917 00:29:16.357555  591333 system_pods.go:89] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:29:16.357560  591333 system_pods.go:89] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:29:16.357565  591333 system_pods.go:89] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:29:16.357570  591333 system_pods.go:89] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:29:16.357576  591333 system_pods.go:89] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:29:16.357582  591333 system_pods.go:89] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:29:16.357591  591333 system_pods.go:89] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:29:16.357599  591333 system_pods.go:89] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:29:16.357605  591333 system_pods.go:89] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:29:16.357611  591333 system_pods.go:89] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:29:16.357614  591333 system_pods.go:89] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:29:16.357619  591333 system_pods.go:89] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:29:16.357623  591333 system_pods.go:89] "kube-vip-ha-671025" [d18d568e-7183-4cb4-898f-c730aa8b9811] Running
	I0917 00:29:16.357630  591333 system_pods.go:89] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:29:16.357633  591333 system_pods.go:89] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:29:16.357642  591333 system_pods.go:126] duration metric: took 3.522377ms to wait for k8s-apps to be running ...
	I0917 00:29:16.357652  591333 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:29:16.357710  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:29:16.370259  591333 system_svc.go:56] duration metric: took 12.594604ms WaitForService to wait for kubelet
	I0917 00:29:16.370292  591333 kubeadm.go:578] duration metric: took 14.666051199s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:29:16.370351  591333 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:29:16.373484  591333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:29:16.373509  591333 node_conditions.go:123] node cpu capacity is 8
	I0917 00:29:16.373526  591333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:29:16.373531  591333 node_conditions.go:123] node cpu capacity is 8
	I0917 00:29:16.373545  591333 node_conditions.go:105] duration metric: took 3.187263ms to run NodePressure ...
	I0917 00:29:16.373563  591333 start.go:241] waiting for startup goroutines ...
	I0917 00:29:16.373599  591333 start.go:255] writing updated cluster config ...
	I0917 00:29:16.375540  591333 out.go:203] 
	I0917 00:29:16.376982  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:29:16.377123  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:29:16.378689  591333 out.go:179] * Starting "ha-671025-m03" control-plane node in "ha-671025" cluster
	I0917 00:29:16.380127  591333 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:29:16.381271  591333 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:29:16.382178  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:29:16.382203  591333 cache.go:58] Caching tarball of preloaded images
	I0917 00:29:16.382278  591333 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:29:16.382305  591333 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:29:16.382314  591333 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:29:16.382434  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:29:16.405280  591333 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:29:16.405301  591333 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:29:16.405319  591333 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:29:16.405349  591333 start.go:360] acquireMachinesLock for ha-671025-m03: {Name:mk60ae20c28e89b2af34eaf4825fcf2e756b9f82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:29:16.405476  591333 start.go:364] duration metric: took 109.564µs to acquireMachinesLock for "ha-671025-m03"
	I0917 00:29:16.405502  591333 start.go:93] Provisioning new machine with config: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:29:16.405601  591333 start.go:125] createHost starting for "m03" (driver="docker")
	I0917 00:29:16.408212  591333 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 00:29:16.408326  591333 start.go:159] libmachine.API.Create for "ha-671025" (driver="docker")
	I0917 00:29:16.408364  591333 client.go:168] LocalClient.Create starting
	I0917 00:29:16.408459  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 00:29:16.408501  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:29:16.408515  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:29:16.408569  591333 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 00:29:16.408588  591333 main.go:141] libmachine: Decoding PEM data...
	I0917 00:29:16.408596  591333 main.go:141] libmachine: Parsing certificate...
	I0917 00:29:16.408797  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:29:16.428129  591333 network_create.go:77] Found existing network {name:ha-671025 subnet:0xc001a2abd0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0917 00:29:16.428169  591333 kic.go:121] calculated static IP "192.168.49.4" for the "ha-671025-m03" container
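The static IP simply follows the order nodes were added on the ha-671025 network (.2 for the primary, .3 for m02, .4 for m03). A minimal way to confirm the existing assignments, assuming the same network name as in this run:

	docker network inspect ha-671025 \
	  --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{println}}{{end}}'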
	I0917 00:29:16.428233  591333 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:29:16.447362  591333 cli_runner.go:164] Run: docker volume create ha-671025-m03 --label name.minikube.sigs.k8s.io=ha-671025-m03 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:29:16.467514  591333 oci.go:103] Successfully created a docker volume ha-671025-m03
	I0917 00:29:16.467629  591333 cli_runner.go:164] Run: docker run --rm --name ha-671025-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025-m03 --entrypoint /usr/bin/test -v ha-671025-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:29:16.870641  591333 oci.go:107] Successfully prepared a docker volume ha-671025-m03
	I0917 00:29:16.870686  591333 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:29:16.870713  591333 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:29:16.870789  591333 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:29:21.201351  591333 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-671025-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.33049988s)
	I0917 00:29:21.201386  591333 kic.go:203] duration metric: took 4.330670212s to extract preloaded images to volume ...
	W0917 00:29:21.201499  591333 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 00:29:21.201529  591333 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 00:29:21.201570  591333 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:29:21.257059  591333 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-671025-m03 --name ha-671025-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-671025-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-671025-m03 --network ha-671025 --ip 192.168.49.4 --volume ha-671025-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 00:29:21.526231  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Running}}
	I0917 00:29:21.546352  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:29:21.567256  591333 cli_runner.go:164] Run: docker exec ha-671025-m03 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:29:21.619083  591333 oci.go:144] the created container "ha-671025-m03" has a running status.
	I0917 00:29:21.619117  591333 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa...
	I0917 00:29:21.831158  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0917 00:29:21.831204  591333 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:29:21.864081  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:29:21.886560  591333 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 00:29:21.886587  591333 kic_runner.go:114] Args: [docker exec --privileged ha-671025-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 00:29:21.939241  591333 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:29:21.960815  591333 machine.go:93] provisionDockerMachine start ...
	I0917 00:29:21.961005  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:21.982259  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:29:21.982549  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0917 00:29:21.982571  591333 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:29:22.123516  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m03
	
	I0917 00:29:22.123558  591333 ubuntu.go:182] provisioning hostname "ha-671025-m03"
	I0917 00:29:22.123633  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:22.143852  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:29:22.144070  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0917 00:29:22.144083  591333 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m03 && echo "ha-671025-m03" | sudo tee /etc/hostname
	I0917 00:29:22.298146  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m03
	
	I0917 00:29:22.298229  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:22.317607  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:29:22.317851  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0917 00:29:22.317875  591333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:29:22.455839  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
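The script above rewrites the 127.0.1.1 entry (Debian/Ubuntu's conventional alias for the local hostname) so the node name resolves without DNS. A quick check inside the node, using the hostname from this run:

	grep -n 'ha-671025-m03' /etc/hosts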
	I0917 00:29:22.455874  591333 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:29:22.455894  591333 ubuntu.go:190] setting up certificates
	I0917 00:29:22.455908  591333 provision.go:84] configureAuth start
	I0917 00:29:22.455983  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:29:22.474745  591333 provision.go:143] copyHostCerts
	I0917 00:29:22.474791  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:29:22.474821  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:29:22.474830  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:29:22.474900  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:29:22.474988  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:29:22.475015  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:29:22.475028  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:29:22.475061  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:29:22.475116  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:29:22.475134  591333 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:29:22.475141  591333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:29:22.475164  591333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:29:22.475216  591333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m03 san=[127.0.0.1 192.168.49.4 ha-671025-m03 localhost minikube]
	I0917 00:29:22.562518  591333 provision.go:177] copyRemoteCerts
	I0917 00:29:22.562597  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:29:22.562645  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:22.582491  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
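minikube reaches the container over the published SSH port on loopback rather than the container IP. The equivalent manual session, using the key path and host port recorded in the line above (illustrative only):

	ssh -i /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa \
	    -p 33158 docker@127.0.0.1 hostname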
	I0917 00:29:22.681516  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:29:22.681585  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:29:22.711977  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:29:22.712070  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:29:22.739378  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:29:22.739454  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:29:22.767225  591333 provision.go:87] duration metric: took 311.299307ms to configureAuth
	I0917 00:29:22.767254  591333 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:29:22.767513  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:29:22.767641  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:22.787106  591333 main.go:141] libmachine: Using SSH client type: native
	I0917 00:29:22.787322  591333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0917 00:29:22.787337  591333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:29:23.027585  591333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:29:23.027618  591333 machine.go:96] duration metric: took 1.066782115s to provisionDockerMachine
	I0917 00:29:23.027629  591333 client.go:171] duration metric: took 6.619257203s to LocalClient.Create
	I0917 00:29:23.027644  591333 start.go:167] duration metric: took 6.619319411s to libmachine.API.Create "ha-671025"
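Provisioning above dropped CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarted cri-o. A minimal sanity check on the node (illustrative, not part of the test run):

	cat /etc/sysconfig/crio.minikube
	sudo systemctl is-active crio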
	I0917 00:29:23.027653  591333 start.go:293] postStartSetup for "ha-671025-m03" (driver="docker")
	I0917 00:29:23.027665  591333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:29:23.027739  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:29:23.027789  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:23.048535  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:29:23.148623  591333 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:29:23.152295  591333 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:29:23.152333  591333 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:29:23.152344  591333 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:29:23.152354  591333 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:29:23.152402  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:29:23.152478  591333 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:29:23.152577  591333 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:29:23.152589  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:29:23.152698  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:29:23.162366  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:29:23.192510  591333 start.go:296] duration metric: took 164.839418ms for postStartSetup
	I0917 00:29:23.192875  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:29:23.211261  591333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:29:23.211545  591333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:29:23.211589  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:23.228367  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:29:23.323873  591333 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:29:23.328453  591333 start.go:128] duration metric: took 6.922836798s to createHost
	I0917 00:29:23.328480  591333 start.go:83] releasing machines lock for "ha-671025-m03", held for 6.9229927s
	I0917 00:29:23.328559  591333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:29:23.348699  591333 out.go:179] * Found network options:
	I0917 00:29:23.350091  591333 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0917 00:29:23.351262  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:29:23.351286  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:29:23.351307  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:29:23.351319  591333 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:29:23.351413  591333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:29:23.351457  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:23.351483  591333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:29:23.351555  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:29:23.370656  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:29:23.370963  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:29:23.603202  591333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:29:23.608556  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:29:23.632987  591333 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:29:23.633078  591333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:29:23.665413  591333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 00:29:23.665445  591333 start.go:495] detecting cgroup driver to use...
	I0917 00:29:23.665479  591333 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:29:23.665582  591333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:29:23.682513  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:29:23.695198  591333 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:29:23.695265  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:29:23.710235  591333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:29:23.725450  591333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:29:23.796030  591333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:29:23.870255  591333 docker.go:234] disabling docker service ...
	I0917 00:29:23.870317  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:29:23.889003  591333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:29:23.901613  591333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:29:23.973987  591333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:29:24.138099  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:29:24.150712  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:29:24.168641  591333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:29:24.168702  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.181874  591333 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:29:24.181936  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.193571  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.204646  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.215806  591333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:29:24.225866  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.236708  591333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:29:24.254758  591333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
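Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the settings sketched below; this is reconstructed from the commands, not a dump of the actual file:

	sudo grep -v '^#' /etc/crio/crio.conf.d/02-crio.conf
	# expected to include, after the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]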
	I0917 00:29:24.266984  591333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:29:24.276695  591333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:29:24.286587  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:29:24.356850  591333 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 00:29:24.461065  591333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:29:24.461156  591333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:29:24.465833  591333 start.go:563] Will wait 60s for crictl version
	I0917 00:29:24.465903  591333 ssh_runner.go:195] Run: which crictl
	I0917 00:29:24.469817  591333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:29:24.506319  591333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:29:24.506419  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:29:24.544050  591333 ssh_runner.go:195] Run: crio --version
	I0917 00:29:24.583372  591333 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:29:24.584727  591333 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:29:24.586235  591333 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0917 00:29:24.587662  591333 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:29:24.605611  591333 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:29:24.610151  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:29:24.622865  591333 mustload.go:65] Loading cluster: ha-671025
	I0917 00:29:24.623090  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:29:24.623289  591333 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:29:24.641474  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:29:24.641732  591333 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.4
	I0917 00:29:24.641743  591333 certs.go:194] generating shared ca certs ...
	I0917 00:29:24.641758  591333 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:29:24.641894  591333 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:29:24.641944  591333 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:29:24.641954  591333 certs.go:256] generating profile certs ...
	I0917 00:29:24.642025  591333 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:29:24.642065  591333 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.bb6f0fe7
	I0917 00:29:24.642081  591333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.bb6f0fe7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0917 00:29:24.856212  591333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.bb6f0fe7 ...
	I0917 00:29:24.856249  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.bb6f0fe7: {Name:mk65d29cf7ba29b99ab2056d134a2884f928fccb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:29:24.856490  591333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.bb6f0fe7 ...
	I0917 00:29:24.856512  591333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.bb6f0fe7: {Name:mkd89da6d4d9fb3421e5c7677b39452bd32f11a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:29:24.856628  591333 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.bb6f0fe7 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt
	I0917 00:29:24.856803  591333 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.bb6f0fe7 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key
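The profile cert is regenerated here because its SAN list must now also cover the new node IP 192.168.49.4. To inspect the SANs in the written cert (path taken from the log above; illustrative):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'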
	I0917 00:29:24.856940  591333 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:29:24.856957  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:29:24.856970  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:29:24.856984  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:29:24.857022  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:29:24.857038  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:29:24.857051  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:29:24.857063  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:29:24.857073  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:29:24.857137  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:29:24.857169  591333 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:29:24.857179  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:29:24.857203  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:29:24.857236  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:29:24.857259  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:29:24.857298  591333 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:29:24.857323  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:29:24.857336  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:29:24.857410  591333 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:29:24.857487  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:29:24.876681  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:29:24.965759  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:29:24.970077  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:29:24.983505  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:29:24.987459  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 00:29:25.001249  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:29:25.005139  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:29:25.019000  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:29:25.023277  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 00:29:25.037665  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:29:25.041486  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:29:25.056004  591333 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:29:25.060379  591333 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:29:25.075527  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:29:25.103048  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:29:25.130436  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:29:25.156335  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:29:25.183962  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0917 00:29:25.210290  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:29:25.237850  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:29:25.264713  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:29:25.292266  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:29:25.322436  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:29:25.349159  591333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:29:25.376714  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:29:25.397066  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 00:29:25.416141  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:29:25.436031  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 00:29:25.455195  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:29:25.475694  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:29:25.494981  591333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:29:25.514182  591333 ssh_runner.go:195] Run: openssl version
	I0917 00:29:25.519757  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:29:25.530366  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:29:25.534300  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:29:25.534372  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:29:25.541463  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:29:25.551798  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:29:25.562696  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:29:25.566820  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:29:25.566898  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:29:25.575288  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:29:25.585578  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:29:25.596219  591333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:29:25.599949  591333 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:29:25.600000  591333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:29:25.608220  591333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
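The b5213941.0 / 51391683.0 / 3ec20f2e.0 names above are OpenSSL subject-hash values: each CA is symlinked under its hash so TLS libraries can locate it in /etc/ssl/certs. The pattern the run above follows, written out as a sketch:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"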
	I0917 00:29:25.620163  591333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:29:25.623987  591333 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 00:29:25.624048  591333 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 crio true true} ...
	I0917 00:29:25.624137  591333 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:29:25.624164  591333 kube-vip.go:115] generating kube-vip config ...
	I0917 00:29:25.624201  591333 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:29:25.637994  591333 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:29:25.638073  591333 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
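Because the ip_vs modules were not found, this manifest relies on ARP mode (vip_arp=true) to announce 192.168.49.254. It is written as a static pod (see the kube-vip.yaml copy a few lines below), so once the kubelet starts it should surface as a mirror pod; assuming the usual <name>-<node> mirror-pod naming:

	kubectl -n kube-system get pod kube-vip-ha-671025-m03 -o wide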
	I0917 00:29:25.638135  591333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:29:25.647722  591333 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:29:25.647792  591333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:29:25.658193  591333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 00:29:25.679949  591333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:29:25.703178  591333 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:29:25.726279  591333 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:29:25.730482  591333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
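The join step below addresses the API server through this VIP name rather than any single node IP, so the mapping has to be in place first. A quick check (illustrative):

	getent hosts control-plane.minikube.internal
	# expected: 192.168.49.254  control-plane.minikube.internal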
	I0917 00:29:25.743251  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:29:25.813167  591333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:29:25.837618  591333 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:29:25.837905  591333 start.go:317] joinCluster: &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:29:25.838070  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0917 00:29:25.838130  591333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:29:25.859495  591333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:29:26.008672  591333 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:29:26.008736  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p1m8ud.vg6wowozjxeubnbu --discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-671025-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0917 00:29:38.691373  591333 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p1m8ud.vg6wowozjxeubnbu --discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-671025-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (12.682606276s)
	I0917 00:29:38.691443  591333 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0917 00:29:38.941535  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-671025-m03 minikube.k8s.io/updated_at=2025_09_17T00_29_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=ha-671025 minikube.k8s.io/primary=false
	I0917 00:29:39.021358  591333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-671025-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0917 00:29:39.107652  591333 start.go:319] duration metric: took 13.269740721s to joinCluster
	I0917 00:29:39.107734  591333 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:29:39.108038  591333 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:29:39.109032  591333 out.go:179] * Verifying Kubernetes components...
	I0917 00:29:39.110170  591333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:29:39.212840  591333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:29:39.228175  591333 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:29:39.228249  591333 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:29:39.228513  591333 node_ready.go:35] waiting up to 6m0s for node "ha-671025-m03" to be "Ready" ...
	W0917 00:29:41.232779  591333 node_ready.go:57] node "ha-671025-m03" has "Ready":"False" status (will retry)
	W0917 00:29:43.732906  591333 node_ready.go:57] node "ha-671025-m03" has "Ready":"False" status (will retry)
	W0917 00:29:46.232976  591333 node_ready.go:57] node "ha-671025-m03" has "Ready":"False" status (will retry)
	W0917 00:29:48.732961  591333 node_ready.go:57] node "ha-671025-m03" has "Ready":"False" status (will retry)
	W0917 00:29:51.232362  591333 node_ready.go:57] node "ha-671025-m03" has "Ready":"False" status (will retry)
	I0917 00:29:51.732347  591333 node_ready.go:49] node "ha-671025-m03" is "Ready"
	I0917 00:29:51.732379  591333 node_ready.go:38] duration metric: took 12.503848437s for node "ha-671025-m03" to be "Ready" ...
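	
	The node_ready.go wait above polls the node object until its Ready condition turns True, retrying roughly every 2.5s for up to 6m0s. The following is a minimal client-go sketch of the same check, not minikube's actual code, assuming a kubeconfig at the default path and the node name from this run:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-671025-m03", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					// NodeReady flips to True once the kubelet, runtime and CNI are healthy.
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(2500 * time.Millisecond) // matches the ~2.5s retry cadence in the log above
		}
		fmt.Println("timed out waiting for node Ready")
	}
	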
	I0917 00:29:51.732413  591333 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:29:51.732477  591333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:29:51.745126  591333 api_server.go:72] duration metric: took 12.637355364s to wait for apiserver process to appear ...
	I0917 00:29:51.745157  591333 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:29:51.745182  591333 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:29:51.751075  591333 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:29:51.752025  591333 api_server.go:141] control plane version: v1.34.0
	I0917 00:29:51.752049  591333 api_server.go:131] duration metric: took 6.885054ms to wait for apiserver health ...
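	
	The healthz gate above is an HTTPS GET against the apiserver that must return 200 with the body "ok" (the bare "ok" line in the log is that response body). A self-contained sketch of the same probe, illustrative only: it skips CA verification, whereas minikube's real check trusts the cluster CA shown in the client config above:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)
	
	func main() {
		// Illustrative shortcut: no CA loaded, so certificate checks are disabled here.
		c := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
		resp, err := c.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // healthy apiserver: 200 ok
	}
	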
	I0917 00:29:51.752060  591333 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:29:51.758905  591333 system_pods.go:59] 24 kube-system pods found
	I0917 00:29:51.758940  591333 system_pods.go:61] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running
	I0917 00:29:51.758949  591333 system_pods.go:61] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running
	I0917 00:29:51.758957  591333 system_pods.go:61] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:29:51.758963  591333 system_pods.go:61] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:29:51.758968  591333 system_pods.go:61] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running
	I0917 00:29:51.758973  591333 system_pods.go:61] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:29:51.758978  591333 system_pods.go:61] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:29:51.758990  591333 system_pods.go:61] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:29:51.758995  591333 system_pods.go:61] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:29:51.759000  591333 system_pods.go:61] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:29:51.759004  591333 system_pods.go:61] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running
	I0917 00:29:51.759009  591333 system_pods.go:61] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:29:51.759018  591333 system_pods.go:61] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:29:51.759023  591333 system_pods.go:61] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running
	I0917 00:29:51.759027  591333 system_pods.go:61] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:29:51.759035  591333 system_pods.go:61] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:29:51.759039  591333 system_pods.go:61] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:29:51.759049  591333 system_pods.go:61] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:29:51.759054  591333 system_pods.go:61] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:29:51.759058  591333 system_pods.go:61] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running
	I0917 00:29:51.759066  591333 system_pods.go:61] "kube-vip-ha-671025" [d18d568e-7183-4cb4-898f-c730aa8b9811] Running
	I0917 00:29:51.759070  591333 system_pods.go:61] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:29:51.759075  591333 system_pods.go:61] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:29:51.759079  591333 system_pods.go:61] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:29:51.759086  591333 system_pods.go:74] duration metric: took 7.019861ms to wait for pod list to return data ...
	I0917 00:29:51.759106  591333 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:29:51.761820  591333 default_sa.go:45] found service account: "default"
	I0917 00:29:51.761841  591333 default_sa.go:55] duration metric: took 2.726063ms for default service account to be created ...
	I0917 00:29:51.761850  591333 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:29:51.766999  591333 system_pods.go:86] 24 kube-system pods found
	I0917 00:29:51.767031  591333 system_pods.go:89] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running
	I0917 00:29:51.767037  591333 system_pods.go:89] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running
	I0917 00:29:51.767041  591333 system_pods.go:89] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:29:51.767044  591333 system_pods.go:89] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:29:51.767047  591333 system_pods.go:89] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running
	I0917 00:29:51.767050  591333 system_pods.go:89] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:29:51.767053  591333 system_pods.go:89] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:29:51.767057  591333 system_pods.go:89] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:29:51.767060  591333 system_pods.go:89] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:29:51.767062  591333 system_pods.go:89] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:29:51.767066  591333 system_pods.go:89] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running
	I0917 00:29:51.767069  591333 system_pods.go:89] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:29:51.767072  591333 system_pods.go:89] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:29:51.767075  591333 system_pods.go:89] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running
	I0917 00:29:51.767078  591333 system_pods.go:89] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:29:51.767081  591333 system_pods.go:89] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:29:51.767084  591333 system_pods.go:89] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:29:51.767087  591333 system_pods.go:89] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:29:51.767089  591333 system_pods.go:89] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:29:51.767093  591333 system_pods.go:89] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running
	I0917 00:29:51.767095  591333 system_pods.go:89] "kube-vip-ha-671025" [d18d568e-7183-4cb4-898f-c730aa8b9811] Running
	I0917 00:29:51.767099  591333 system_pods.go:89] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:29:51.767105  591333 system_pods.go:89] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:29:51.767108  591333 system_pods.go:89] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:29:51.767115  591333 system_pods.go:126] duration metric: took 5.259145ms to wait for k8s-apps to be running ...
	I0917 00:29:51.767125  591333 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:29:51.767173  591333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:29:51.780761  591333 system_svc.go:56] duration metric: took 13.623242ms WaitForService to wait for kubelet
	I0917 00:29:51.780795  591333 kubeadm.go:578] duration metric: took 12.673026165s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:29:51.780819  591333 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:29:51.783987  591333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:29:51.784014  591333 node_conditions.go:123] node cpu capacity is 8
	I0917 00:29:51.784059  591333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:29:51.784065  591333 node_conditions.go:123] node cpu capacity is 8
	I0917 00:29:51.784075  591333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:29:51.784081  591333 node_conditions.go:123] node cpu capacity is 8
	I0917 00:29:51.784090  591333 node_conditions.go:105] duration metric: took 3.264516ms to run NodePressure ...
	I0917 00:29:51.784106  591333 start.go:241] waiting for startup goroutines ...
	I0917 00:29:51.784138  591333 start.go:255] writing updated cluster config ...
	I0917 00:29:51.784529  591333 ssh_runner.go:195] Run: rm -f paused
	I0917 00:29:51.788748  591333 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:29:51.789284  591333 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:29:51.792784  591333 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mqh24" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.797966  591333 pod_ready.go:94] pod "coredns-66bc5c9577-mqh24" is "Ready"
	I0917 00:29:51.797991  591333 pod_ready.go:86] duration metric: took 5.183268ms for pod "coredns-66bc5c9577-mqh24" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.798004  591333 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vfj56" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.802611  591333 pod_ready.go:94] pod "coredns-66bc5c9577-vfj56" is "Ready"
	I0917 00:29:51.802634  591333 pod_ready.go:86] duration metric: took 4.623535ms for pod "coredns-66bc5c9577-vfj56" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.805006  591333 pod_ready.go:83] waiting for pod "etcd-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.809379  591333 pod_ready.go:94] pod "etcd-ha-671025" is "Ready"
	I0917 00:29:51.809416  591333 pod_ready.go:86] duration metric: took 4.389649ms for pod "etcd-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.809427  591333 pod_ready.go:83] waiting for pod "etcd-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.813691  591333 pod_ready.go:94] pod "etcd-ha-671025-m02" is "Ready"
	I0917 00:29:51.813712  591333 pod_ready.go:86] duration metric: took 4.279249ms for pod "etcd-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.813720  591333 pod_ready.go:83] waiting for pod "etcd-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:51.990174  591333 request.go:683] "Waited before sending request" delay="176.338354ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671025-m03"
	I0917 00:29:52.190229  591333 request.go:683] "Waited before sending request" delay="196.333995ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m03"
	I0917 00:29:52.193665  591333 pod_ready.go:94] pod "etcd-ha-671025-m03" is "Ready"
	I0917 00:29:52.193693  591333 pod_ready.go:86] duration metric: took 379.968155ms for pod "etcd-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:52.390210  591333 request.go:683] "Waited before sending request" delay="196.377999ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0917 00:29:52.394451  591333 pod_ready.go:83] waiting for pod "kube-apiserver-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:52.590608  591333 request.go:683] "Waited before sending request" delay="196.01886ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671025"
	I0917 00:29:52.790098  591333 request.go:683] "Waited before sending request" delay="196.369455ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025"
	I0917 00:29:52.793544  591333 pod_ready.go:94] pod "kube-apiserver-ha-671025" is "Ready"
	I0917 00:29:52.793578  591333 pod_ready.go:86] duration metric: took 399.098458ms for pod "kube-apiserver-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:52.793595  591333 pod_ready.go:83] waiting for pod "kube-apiserver-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:52.990070  591333 request.go:683] "Waited before sending request" delay="196.355614ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671025-m02"
	I0917 00:29:53.190086  591333 request.go:683] "Waited before sending request" delay="196.360413ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m02"
	I0917 00:29:53.193284  591333 pod_ready.go:94] pod "kube-apiserver-ha-671025-m02" is "Ready"
	I0917 00:29:53.193311  591333 pod_ready.go:86] duration metric: took 399.708595ms for pod "kube-apiserver-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:53.193320  591333 pod_ready.go:83] waiting for pod "kube-apiserver-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:53.390584  591333 request.go:683] "Waited before sending request" delay="197.147317ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671025-m03"
	I0917 00:29:53.590103  591333 request.go:683] "Waited before sending request" delay="196.290111ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m03"
	I0917 00:29:53.593362  591333 pod_ready.go:94] pod "kube-apiserver-ha-671025-m03" is "Ready"
	I0917 00:29:53.593412  591333 pod_ready.go:86] duration metric: took 400.084881ms for pod "kube-apiserver-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:53.790733  591333 request.go:683] "Waited before sending request" delay="197.180718ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0917 00:29:53.794548  591333 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:53.989879  591333 request.go:683] "Waited before sending request" delay="195.193469ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671025"
	I0917 00:29:54.190518  591333 request.go:683] "Waited before sending request" delay="197.369336ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025"
	I0917 00:29:54.194152  591333 pod_ready.go:94] pod "kube-controller-manager-ha-671025" is "Ready"
	I0917 00:29:54.194183  591333 pod_ready.go:86] duration metric: took 399.607782ms for pod "kube-controller-manager-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:54.194195  591333 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:54.390598  591333 request.go:683] "Waited before sending request" delay="196.290873ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671025-m02"
	I0917 00:29:54.590577  591333 request.go:683] "Waited before sending request" delay="196.311056ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m02"
	I0917 00:29:54.594360  591333 pod_ready.go:94] pod "kube-controller-manager-ha-671025-m02" is "Ready"
	I0917 00:29:54.594432  591333 pod_ready.go:86] duration metric: took 400.227353ms for pod "kube-controller-manager-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:54.594445  591333 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:54.789830  591333 request.go:683] "Waited before sending request" delay="195.263054ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671025-m03"
	I0917 00:29:54.990466  591333 request.go:683] "Waited before sending request" delay="197.342033ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m03"
	I0917 00:29:54.993759  591333 pod_ready.go:94] pod "kube-controller-manager-ha-671025-m03" is "Ready"
	I0917 00:29:54.993788  591333 pod_ready.go:86] duration metric: took 399.335381ms for pod "kube-controller-manager-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:55.190138  591333 request.go:683] "Waited before sending request" delay="196.195607ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0917 00:29:55.194060  591333 pod_ready.go:83] waiting for pod "kube-proxy-4k8lz" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:55.390543  591333 request.go:683] "Waited before sending request" delay="196.36227ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4k8lz"
	I0917 00:29:55.590492  591333 request.go:683] "Waited before sending request" delay="196.425967ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m02"
	I0917 00:29:55.593719  591333 pod_ready.go:94] pod "kube-proxy-4k8lz" is "Ready"
	I0917 00:29:55.593746  591333 pod_ready.go:86] duration metric: took 399.654072ms for pod "kube-proxy-4k8lz" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:55.593753  591333 pod_ready.go:83] waiting for pod "kube-proxy-f58dt" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:55.790222  591333 request.go:683] "Waited before sending request" delay="196.381687ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f58dt"
	I0917 00:29:55.990078  591333 request.go:683] "Waited before sending request" delay="196.35386ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025"
	I0917 00:29:55.993537  591333 pod_ready.go:94] pod "kube-proxy-f58dt" is "Ready"
	I0917 00:29:55.993565  591333 pod_ready.go:86] duration metric: took 399.806033ms for pod "kube-proxy-f58dt" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:55.993573  591333 pod_ready.go:83] waiting for pod "kube-proxy-q96zd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:56.190000  591333 request.go:683] "Waited before sending request" delay="196.348448ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q96zd"
	I0917 00:29:56.390582  591333 request.go:683] "Waited before sending request" delay="197.229029ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m03"
	I0917 00:29:56.393563  591333 pod_ready.go:94] pod "kube-proxy-q96zd" is "Ready"
	I0917 00:29:56.393592  591333 pod_ready.go:86] duration metric: took 400.012384ms for pod "kube-proxy-q96zd" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:56.590057  591333 request.go:683] "Waited before sending request" delay="196.329973ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I0917 00:29:56.593914  591333 pod_ready.go:83] waiting for pod "kube-scheduler-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:56.790433  591333 request.go:683] "Waited before sending request" delay="196.375831ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671025"
	I0917 00:29:56.990073  591333 request.go:683] "Waited before sending request" delay="196.373603ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025"
	I0917 00:29:56.993259  591333 pod_ready.go:94] pod "kube-scheduler-ha-671025" is "Ready"
	I0917 00:29:56.993288  591333 pod_ready.go:86] duration metric: took 399.350969ms for pod "kube-scheduler-ha-671025" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:56.993297  591333 pod_ready.go:83] waiting for pod "kube-scheduler-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:57.190549  591333 request.go:683] "Waited before sending request" delay="197.173424ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671025-m02"
	I0917 00:29:57.390069  591333 request.go:683] "Waited before sending request" delay="196.377477ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m02"
	I0917 00:29:57.393214  591333 pod_ready.go:94] pod "kube-scheduler-ha-671025-m02" is "Ready"
	I0917 00:29:57.393243  591333 pod_ready.go:86] duration metric: took 399.939467ms for pod "kube-scheduler-ha-671025-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:57.393254  591333 pod_ready.go:83] waiting for pod "kube-scheduler-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:57.590599  591333 request.go:683] "Waited before sending request" delay="197.214476ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671025-m03"
	I0917 00:29:57.790207  591333 request.go:683] "Waited before sending request" delay="196.332231ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-671025-m03"
	I0917 00:29:57.793613  591333 pod_ready.go:94] pod "kube-scheduler-ha-671025-m03" is "Ready"
	I0917 00:29:57.793646  591333 pod_ready.go:86] duration metric: took 400.384119ms for pod "kube-scheduler-ha-671025-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:29:57.793660  591333 pod_ready.go:40] duration metric: took 6.00487949s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
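	
	The repeated "Waited before sending request ... client-side throttling, not priority and fairness" entries above come from client-go's token-bucket rate limiter: the rest.Config printed earlier has QPS:0, Burst:0, which client-go treats as the defaults of 5 QPS / burst 10, producing the ~200ms spacing between GETs. A hypothetical sketch of raising those limits on a rest.Config; the values are illustrative, not what minikube uses:
	
	package main
	
	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// QPS==0 / Burst==0 mean "use the defaults" (5 QPS, burst 10).
		cfg.QPS = 50    // steady-state requests per second before the limiter delays calls
		cfg.Burst = 100 // short bursts allowed above the steady rate
		client := kubernetes.NewForConfigOrDie(cfg)
		_ = client // requests like the GETs above would no longer be delayed client-side
	}
	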
	I0917 00:29:57.841958  591333 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 00:29:57.843747  591333 out.go:179] * Done! kubectl is now configured to use "ha-671025" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 17 00:28:42 ha-671025 crio[943]: time="2025-09-17 00:28:42.206543981Z" level=info msg="Starting container: 1b2322cca73664c31f8f758bee585a6b9e12f3a99cb34f8075ed9d4ba6a7424e" id=3b28becd-1d34-462d-9922-4034e8ecf6f4 name=/runtime.v1.RuntimeService/StartContainer
	Sep 17 00:28:42 ha-671025 crio[943]: time="2025-09-17 00:28:42.215619295Z" level=info msg="Started container" PID=2320 containerID=1b2322cca73664c31f8f758bee585a6b9e12f3a99cb34f8075ed9d4ba6a7424e description=kube-system/coredns-66bc5c9577-vfj56/coredns id=3b28becd-1d34-462d-9922-4034e8ecf6f4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39dc71832b8bb399ba20ce48f2427629524276766208427b4f7705d2c0d5a7bc
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.112704664Z" level=info msg="Running pod sandbox: default/busybox-7b57f96db7-wj4r5/POD" id=736d7d5c-e0a6-4add-85d8-01da4ad50ed0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.112791033Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.130623397Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-wj4r5 Namespace:default ID:6347f27b59723d9ed5d766202817f12864c3d029b677244c2214fe27b0e75f0f UID:90adda6e-a8af-41fd-880e-3820a76c660d NetNS:/var/run/netns/54f65633-04cf-4581-8596-83e8bb3b45c1 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.130669888Z" level=info msg="Adding pod default_busybox-7b57f96db7-wj4r5 to CNI network \"kindnet\" (type=ptp)"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.142401777Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-wj4r5 Namespace:default ID:6347f27b59723d9ed5d766202817f12864c3d029b677244c2214fe27b0e75f0f UID:90adda6e-a8af-41fd-880e-3820a76c660d NetNS:/var/run/netns/54f65633-04cf-4581-8596-83e8bb3b45c1 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.142574298Z" level=info msg="Checking pod default_busybox-7b57f96db7-wj4r5 for CNI network kindnet (type=ptp)"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.143612429Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.144813443Z" level=info msg="Ran pod sandbox 6347f27b59723d9ed5d766202817f12864c3d029b677244c2214fe27b0e75f0f with infra container: default/busybox-7b57f96db7-wj4r5/POD" id=736d7d5c-e0a6-4add-85d8-01da4ad50ed0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.146339053Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=b8619712-84fc-406a-a07d-46448e259e67 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.146578417Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=b8619712-84fc-406a-a07d-46448e259e67 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.147237951Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=4869ff93-ff5d-4c5f-bc8f-3cabe3c7db56 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.148635276Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 17 00:29:59 ha-671025 crio[943]: time="2025-09-17 00:29:59.991719699Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.350447433Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=4869ff93-ff5d-4c5f-bc8f-3cabe3c7db56 name=/runtime.v1.ImageService/PullImage
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.351203929Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=2f8c5eb2-d95f-4e4e-9638-5776fd3166b1 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.352357885Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2f8c5eb2-d95f-4e4e-9638-5776fd3166b1 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.353373442Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=abfbef5f-c90d-4ad8-b2a8-4baf401fbd2d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.354669415Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=abfbef5f-c90d-4ad8-b2a8-4baf401fbd2d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.358933450Z" level=info msg="Creating container: default/busybox-7b57f96db7-wj4r5/busybox" id=05a5a4c3-ddd6-4e31-bcd3-15fa6fbc19a8 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.359053527Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.435258478Z" level=info msg="Created container 7f97d1a1e175b51d7a889f9fe8b94ec1d245d9c3ad1f48bb929cc3544665036a: default/busybox-7b57f96db7-wj4r5/busybox" id=05a5a4c3-ddd6-4e31-bcd3-15fa6fbc19a8 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.436586730Z" level=info msg="Starting container: 7f97d1a1e175b51d7a889f9fe8b94ec1d245d9c3ad1f48bb929cc3544665036a" id=134529e8-d9b9-4298-b3e5-c73a5d72f6fd name=/runtime.v1.RuntimeService/StartContainer
	Sep 17 00:30:01 ha-671025 crio[943]: time="2025-09-17 00:30:01.446220694Z" level=info msg="Started container" PID=2585 containerID=7f97d1a1e175b51d7a889f9fe8b94ec1d245d9c3ad1f48bb929cc3544665036a description=default/busybox-7b57f96db7-wj4r5/busybox id=134529e8-d9b9-4298-b3e5-c73a5d72f6fd name=/runtime.v1.RuntimeService/StartContainer sandboxID=6347f27b59723d9ed5d766202817f12864c3d029b677244c2214fe27b0e75f0f
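	
	The CRI-O entries above trace the standard CRI image flow for the busybox pod: ImageStatus (miss), PullImage, ImageStatus (hit), then CreateContainer/StartContainer. A minimal sketch of the same ImageStatus call over the crio socket, using the k8s.io/cri-api client; this is an illustration under those assumptions, not minikube's or the kubelet's code:
	
	package main
	
	import (
		"context"
		"fmt"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Same socket the join command referenced: unix:///var/run/crio/crio.sock
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		img := runtimeapi.NewImageServiceClient(conn)
		resp, err := img.ImageStatus(context.TODO(), &runtimeapi.ImageStatusRequest{
			Image: &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28"},
		})
		if err != nil {
			panic(err)
		}
		if resp.Image == nil {
			fmt.Println("image not found") // this is what triggers the PullImage seen in the log
		} else {
			fmt.Println("image present:", resp.Image.Id)
		}
	}
	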
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7f97d1a1e175b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   2 minutes ago       Running             busybox                   0                   6347f27b59723       busybox-7b57f96db7-wj4r5
	1b2322cca7366       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      3 minutes ago       Running             coredns                   0                   39dc71832b8bb       coredns-66bc5c9577-vfj56
	2f150c7f7dc18       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       0                   f228c8ac21369       storage-provisioner
	4fd73d6446292       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      3 minutes ago       Running             coredns                   0                   92ca6f4389168       coredns-66bc5c9577-mqh24
	97d03ed4f05c2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      3 minutes ago       Running             kindnet-cni               0                   ad7fd40f66a01       kindnet-9zvhz
	beeb8e61abad9       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      3 minutes ago       Running             kube-proxy                0                   527193be2b767       kube-proxy-f58dt
	ecb56d4cc4c88       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     3 minutes ago       Running             kube-vip                  0                   852e4beaeede7       kube-vip-ha-671025
	7a41c39db49f4       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      3 minutes ago       Running             kube-scheduler            0                   2a00cabb8a637       kube-scheduler-ha-671025
	d4e775bc05e92       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      3 minutes ago       Running             kube-apiserver            0                   e909c5565b688       kube-apiserver-ha-671025
	b966a80c48716       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      3 minutes ago       Running             kube-controller-manager   0                   9e2f63f3286f1       kube-controller-manager-ha-671025
	7819068a50e98       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      3 minutes ago       Running             etcd                      0                   985f7f1c3407d       etcd-ha-671025
	
	
	==> coredns [1b2322cca73664c31f8f758bee585a6b9e12f3a99cb34f8075ed9d4ba6a7424e] <==
	[INFO] 10.244.0.4:52527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000231229s
	[INFO] 10.244.0.4:39416 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.0015558s
	[INFO] 10.244.0.4:45468 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.000706318s
	[INFO] 10.244.0.4:53485 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.000087472s
	[INFO] 10.244.1.2:37939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156622s
	[INFO] 10.244.1.2:47463 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.000147027s
	[INFO] 10.244.2.2:34151 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011555178s
	[INFO] 10.244.2.2:39096 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.081855349s
	[INFO] 10.244.2.2:40937 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000241541s
	[INFO] 10.244.0.4:56066 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205334s
	[INFO] 10.244.0.4:52703 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000134531s
	[INFO] 10.244.0.4:56844 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000105782s
	[INFO] 10.244.0.4:52436 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144945s
	[INFO] 10.244.1.2:42520 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154899s
	[INFO] 10.244.1.2:36438 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000196498s
	[INFO] 10.244.2.2:42902 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170395s
	[INFO] 10.244.2.2:44897 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143905s
	[INFO] 10.244.0.4:59616 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105243s
	[INFO] 10.244.1.2:39631 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002321s
	[INFO] 10.244.1.2:59007 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009976s
	[INFO] 10.244.2.2:53521 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000146002s
	[INFO] 10.244.2.2:56762 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000164207s
	[INFO] 10.244.0.4:51093 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145402s
	[INFO] 10.244.0.4:37880 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097925s
	[INFO] 10.244.1.2:55010 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144896s
	
	
	==> coredns [4fd73d6446292f190b136d89cd25bf39fce256818f5056f6d2665d5e4fa5ebd5] <==
	[INFO] 10.244.2.2:37478 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001401s
	[INFO] 10.244.0.4:32873 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013759s
	[INFO] 10.244.0.4:37452 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.006758446s
	[INFO] 10.244.0.4:53096 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156627s
	[INFO] 10.244.0.4:33933 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125115s
	[INFO] 10.244.1.2:46463 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000282565s
	[INFO] 10.244.1.2:39686 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00021884s
	[INFO] 10.244.1.2:54348 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01683783s
	[INFO] 10.244.1.2:54156 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000247643s
	[INFO] 10.244.1.2:51012 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000248315s
	[INFO] 10.244.1.2:49586 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095306s
	[INFO] 10.244.2.2:42847 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150928s
	[INFO] 10.244.2.2:38291 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000461737s
	[INFO] 10.244.0.4:57992 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127693s
	[INFO] 10.244.0.4:53956 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000219562s
	[INFO] 10.244.0.4:34480 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117878s
	[INFO] 10.244.1.2:37372 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177692s
	[INFO] 10.244.1.2:44790 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000227814s
	[INFO] 10.244.2.2:55057 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193926s
	[INFO] 10.244.2.2:51005 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158043s
	[INFO] 10.244.0.4:57976 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000144447s
	[INFO] 10.244.0.4:45233 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113362s
	[INFO] 10.244.1.2:59399 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116822s
	[INFO] 10.244.1.2:55814 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000105565s
	[INFO] 10.244.1.2:33844 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000129758s
	
	
	==> describe nodes <==
	Name:               ha-671025
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T00_28_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:28:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:31:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:30:27 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:30:27 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:30:27 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:30:27 +0000   Wed, 17 Sep 2025 00:28:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-671025
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf085e2718b148b5ad91c414953b197e
	  System UUID:                3f139a28-0338-43b0-8ed0-9128b9dcda65
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wj4r5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 coredns-66bc5c9577-mqh24             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m33s
	  kube-system                 coredns-66bc5c9577-vfj56             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m33s
	  kube-system                 etcd-ha-671025                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m39s
	  kube-system                 kindnet-9zvhz                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m33s
	  kube-system                 kube-apiserver-ha-671025             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 kube-controller-manager-ha-671025    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 kube-proxy-f58dt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 kube-scheduler-ha-671025             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 kube-vip-ha-671025                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m32s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m43s (x8 over 3m43s)  kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     3m43s (x8 over 3m43s)  kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    3m43s (x8 over 3m43s)  kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 3m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m39s                  kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    3m39s                  kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m39s                  kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m34s                  node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  NodeReady                3m22s                  kubelet          Node ha-671025 status is now: NodeReady
	  Normal  RegisteredNode           3m4s                   node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           2m27s                  node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           41s                    node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	
	
	Name:               ha-671025-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_17T00_29_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:29:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:32:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:31:19 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:31:19 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:31:19 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:31:19 +0000   Wed, 17 Sep 2025 00:29:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-671025-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8c6142744954d91af4a5a05dad1716a
	  System UUID:                7d7ccba3-1786-4f88-a69c-4a852e967ea0
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-zw5tc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 etcd-ha-671025-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m
	  kube-system                 kindnet-7scsq                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m2s
	  kube-system                 kube-apiserver-ha-671025-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 kube-controller-manager-ha-671025-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 kube-proxy-4k8lz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 kube-scheduler-ha-671025-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 kube-vip-ha-671025-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m57s              kube-proxy       
	  Normal  RegisteredNode           2m59s              node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode           2m59s              node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode           2m27s              node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  Starting                 47s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node ha-671025-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node ha-671025-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node ha-671025-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	
	
	Name:               ha-671025-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_17T00_29_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:29:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:32:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:30:39 +0000   Wed, 17 Sep 2025 00:29:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:30:39 +0000   Wed, 17 Sep 2025 00:29:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:30:39 +0000   Wed, 17 Sep 2025 00:29:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:30:39 +0000   Wed, 17 Sep 2025 00:29:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-671025-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 660e9daa5dff498295dc0311dee374a4
	  System UUID:                ca019c4e-efee-45a1-854b-8ad90ea7fdf4
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-dk9cf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 etcd-ha-671025-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m23s
	  kube-system                 kindnet-9w6f7                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m25s
	  kube-system                 kube-apiserver-ha-671025-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-ha-671025-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-q96zd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-scheduler-ha-671025-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-vip-ha-671025-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        2m22s  kube-proxy       
	  Normal  RegisteredNode  2m24s  node-controller  Node ha-671025-m03 event: Registered Node ha-671025-m03 in Controller
	  Normal  RegisteredNode  2m24s  node-controller  Node ha-671025-m03 event: Registered Node ha-671025-m03 in Controller
	  Normal  RegisteredNode  2m22s  node-controller  Node ha-671025-m03 event: Registered Node ha-671025-m03 in Controller
	  Normal  RegisteredNode  41s    node-controller  Node ha-671025-m03 event: Registered Node ha-671025-m03 in Controller
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	
	
	==> etcd [7819068a50e981a28f7aac6e0ffa00b30498aa7a8728f90c252a1dde8a63172c] <==
	{"level":"warn","ts":"2025-09-17T00:31:15.412878Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:31:15.460077Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:31:15.560974Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:31:15.660125Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:31:15.662464Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:31:15.760381Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:31:15.841855Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:31:15.860567Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:31:15.888731Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:31:15.890789Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:31:15.942631Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:31:15.960321Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:31:16.060766Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:31:16.091742Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:31:16.160477Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:31:16.203556Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:31:16.260972Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:31:16.359998Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"info","ts":"2025-09-17T00:31:17.261186Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b65d66e84a12b94b","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-17T00:31:17.261235Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"b65d66e84a12b94b"}
	{"level":"info","ts":"2025-09-17T00:31:17.261268Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b"}
	{"level":"info","ts":"2025-09-17T00:31:17.270894Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b65d66e84a12b94b","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-17T00:31:17.271043Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b"}
	{"level":"info","ts":"2025-09-17T00:31:17.279150Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b"}
	{"level":"info","ts":"2025-09-17T00:31:17.279318Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b65d66e84a12b94b"}
	
	
	==> kernel <==
	 00:32:03 up  3:14,  0 users,  load average: 0.91, 0.63, 4.83
	Linux ha-671025 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [97d03ed4f05c2c8a7edb2014248bdbf3d9cfbee7da82980f69fec92e92471166] <==
	I0917 00:31:21.205042       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:31:31.203269       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:31:31.203307       1 main.go:301] handling current node
	I0917 00:31:31.203327       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:31:31.203383       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:31:31.203623       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:31:31.203638       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:31:41.210517       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:31:41.210556       1 main.go:301] handling current node
	I0917 00:31:41.210574       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:31:41.210578       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:31:41.210752       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:31:41.210762       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:31:51.212471       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:31:51.212511       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:31:51.212724       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:31:51.212736       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:31:51.212822       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:31:51.212830       1 main.go:301] handling current node
	I0917 00:32:01.212482       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:32:01.212530       1 main.go:301] handling current node
	I0917 00:32:01.212551       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:32:01.212558       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:32:01.212766       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:32:01.212779       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [d4e775bc05e92406988cf96c77fa7e581cfe8cc2f3f70e1efc89c2ec23a63e4a] <==
	I0917 00:28:24.764710       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0917 00:28:29.928906       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:28:29.932824       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:28:30.328091       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0917 00:28:30.429040       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0917 00:29:34.977143       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:29:44.951924       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0917 00:30:02.333807       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45142: use of closed network connection
	E0917 00:30:02.515957       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45160: use of closed network connection
	E0917 00:30:02.696738       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45172: use of closed network connection
	E0917 00:30:02.975357       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45188: use of closed network connection
	E0917 00:30:03.163201       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45206: use of closed network connection
	E0917 00:30:03.360510       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45214: use of closed network connection
	E0917 00:30:03.537260       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45238: use of closed network connection
	E0917 00:30:03.723220       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45262: use of closed network connection
	E0917 00:30:03.899588       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45288: use of closed network connection
	E0917 00:30:04.199638       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45314: use of closed network connection
	E0917 00:30:04.375427       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45330: use of closed network connection
	E0917 00:30:04.546665       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45360: use of closed network connection
	E0917 00:30:04.718966       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45380: use of closed network connection
	E0917 00:30:04.893333       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45402: use of closed network connection
	E0917 00:30:05.069202       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:45414: use of closed network connection
	I0917 00:30:52.986088       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:31:02.474488       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0917 00:31:04.001528       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	
	
	==> kube-controller-manager [b966a80c487167a8ef5e8ce7981e5a50b500e5d8ce6a71e00ed74b342da31465] <==
	I0917 00:28:29.324302       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:28:29.324327       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0917 00:28:29.324356       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0917 00:28:29.325297       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0917 00:28:29.325324       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0917 00:28:29.325364       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0917 00:28:29.325335       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0917 00:28:29.325427       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0917 00:28:29.326766       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0917 00:28:29.333261       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:28:29.333638       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:28:29.333657       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0917 00:28:29.333665       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 00:28:29.340961       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0917 00:28:29.343294       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0917 00:28:29.353739       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:28:44.313285       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E0917 00:29:00.309163       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-g7wk8 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-g7wk8\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0917 00:29:00.997925       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-671025-m02\" does not exist"
	I0917 00:29:01.017089       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-671025-m02" podCIDRs=["10.244.1.0/24"]
	I0917 00:29:04.315749       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-671025-m02"
	E0917 00:29:37.100559       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-4vrlk failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-4vrlk\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0917 00:29:38.581695       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-671025-m03\" does not exist"
	I0917 00:29:38.589924       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-671025-m03" podCIDRs=["10.244.2.0/24"]
	I0917 00:29:39.436557       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-671025-m03"
	
	
	==> kube-proxy [beeb8e61abad9cff9c53d8b6d7bd473fa1b23bbe18bf4739d34ffc8956376ff2] <==
	I0917 00:28:30.830323       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:28:30.891652       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:28:30.992026       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:28:30.992089       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:28:30.992227       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:28:31.013108       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:28:31.013179       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:28:31.018687       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:28:31.019218       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:28:31.019253       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:28:31.020737       1 config.go:200] "Starting service config controller"
	I0917 00:28:31.020764       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:28:31.020800       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:28:31.020809       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:28:31.020897       1 config.go:309] "Starting node config controller"
	I0917 00:28:31.020964       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:28:31.021001       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:28:31.021018       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:28:31.021055       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:28:31.121005       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:28:31.121031       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:28:31.121168       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7a41c39db49f45380d579839f82d520984625d29f4dabaef0381390e6bdf676a] <==
	E0917 00:28:22.635845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:28:22.635883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0917 00:28:22.635646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0917 00:28:22.635968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0917 00:28:22.636038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0917 00:28:22.636058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0917 00:28:22.636404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0917 00:28:22.636428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0917 00:28:22.636582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0917 00:28:22.636623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0917 00:28:22.636965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0917 00:28:23.460819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0917 00:28:23.509027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0917 00:28:23.580561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:28:23.582654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0917 00:28:23.693685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0917 00:28:26.831507       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0917 00:29:01.061353       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-t9sbk\": pod kindnet-t9sbk is already assigned to node \"ha-671025-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-t9sbk" node="ha-671025-m02"
	E0917 00:29:01.061564       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 138da6b8-9faf-407f-8647-78ecb92029f1(kube-system/kindnet-t9sbk) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-t9sbk"
	E0917 00:29:01.061607       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-t9sbk\": pod kindnet-t9sbk is already assigned to node \"ha-671025-m02\"" logger="UnhandledError" pod="kube-system/kindnet-t9sbk"
	I0917 00:29:01.062825       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-t9sbk" node="ha-671025-m02"
	E0917 00:29:38.625075       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-q96zd\": pod kube-proxy-q96zd is already assigned to node \"ha-671025-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-q96zd" node="ha-671025-m03"
	E0917 00:29:38.625173       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 9fe8a312-c296-4c84-9c30-5e578c24e82e(kube-system/kube-proxy-q96zd) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-q96zd"
	E0917 00:29:38.625194       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-q96zd\": pod kube-proxy-q96zd is already assigned to node \"ha-671025-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-q96zd"
	I0917 00:29:38.626798       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-q96zd" node="ha-671025-m03"
	
	
	==> kubelet <==
	Sep 17 00:30:02 ha-671025 kubelet[1668]: E0917 00:30:02.515952    1668 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41316->127.0.0.1:37239: write tcp 127.0.0.1:41316->127.0.0.1:37239: write: broken pipe
	Sep 17 00:30:04 ha-671025 kubelet[1668]: E0917 00:30:04.594113    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069004593825500  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:04 ha-671025 kubelet[1668]: E0917 00:30:04.594155    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069004593825500  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:14 ha-671025 kubelet[1668]: E0917 00:30:14.595504    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069014595204257  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:14 ha-671025 kubelet[1668]: E0917 00:30:14.595637    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069014595204257  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:24 ha-671025 kubelet[1668]: E0917 00:30:24.597161    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069024596864722  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:24 ha-671025 kubelet[1668]: E0917 00:30:24.597200    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069024596864722  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:34 ha-671025 kubelet[1668]: E0917 00:30:34.598240    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069034598011866  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:34 ha-671025 kubelet[1668]: E0917 00:30:34.598284    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069034598011866  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:44 ha-671025 kubelet[1668]: E0917 00:30:44.600122    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069044599859993  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:44 ha-671025 kubelet[1668]: E0917 00:30:44.600164    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069044599859993  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:54 ha-671025 kubelet[1668]: E0917 00:30:54.601918    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069054601658769  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:30:54 ha-671025 kubelet[1668]: E0917 00:30:54.601958    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069054601658769  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:31:04 ha-671025 kubelet[1668]: E0917 00:31:04.604079    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069064603787483  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:31:04 ha-671025 kubelet[1668]: E0917 00:31:04.604118    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069064603787483  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:31:14 ha-671025 kubelet[1668]: E0917 00:31:14.605770    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069074605478448  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:31:14 ha-671025 kubelet[1668]: E0917 00:31:14.605812    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069074605478448  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:31:24 ha-671025 kubelet[1668]: E0917 00:31:24.607915    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069084607646276  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:31:24 ha-671025 kubelet[1668]: E0917 00:31:24.607952    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069084607646276  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:31:34 ha-671025 kubelet[1668]: E0917 00:31:34.609296    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069094609049332  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:31:34 ha-671025 kubelet[1668]: E0917 00:31:34.609339    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069094609049332  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:31:44 ha-671025 kubelet[1668]: E0917 00:31:44.610636    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069104610377539  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:31:44 ha-671025 kubelet[1668]: E0917 00:31:44.610679    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069104610377539  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:31:54 ha-671025 kubelet[1668]: E0917 00:31:54.611885    1668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069114611662832  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:31:54 ha-671025 kubelet[1668]: E0917 00:31:54.611926    1668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069114611662832  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-671025 -n ha-671025
helpers_test.go:269: (dbg) Run:  kubectl --context ha-671025 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (48.31s)
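For reference, the two post-mortem probes recorded above (helpers_test.go:262 and helpers_test.go:269) can be replayed by hand; this is a sketch that assumes the ha-671025 profile and its kubeconfig context from this run still exist:

	# Check what the ha-671025 profile reports for its API server status
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p ha-671025 -n ha-671025
	# List any pods, across all namespaces, that are not in phase Running
	kubectl --context ha-671025 get po -A \
	  -o=jsonpath='{.items[*].metadata.name}' \
	  --field-selector=status.phase!=Running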

TestMultiControlPlane/serial/RestartClusterKeepsNodes (461.06s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 stop --alsologtostderr -v 5
E0917 00:32:06.104971  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:32:47.066550  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-671025 stop --alsologtostderr -v 5: (48.196541336s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 start --wait true --alsologtostderr -v 5
E0917 00:34:08.988928  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:35:14.435850  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:36:25.128383  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:36:52.831286  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 start --wait true --alsologtostderr -v 5: exit status 80 (6m50.506300602s)

-- stdout --
	* [ha-671025] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-671025" primary control-plane node in "ha-671025" cluster
	* Pulling base image v0.0.48 ...
	* Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	* Enabled addons: 
	
	* Starting "ha-671025-m02" control-plane node in "ha-671025" cluster
	* Pulling base image v0.0.48 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-671025-m03" control-plane node in "ha-671025" cluster
	* Pulling base image v0.0.48 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	* Starting "ha-671025-m04" worker node in "ha-671025" cluster
	* Pulling base image v0.0.48 ...
	
	

-- /stdout --
** stderr ** 
	I0917 00:32:53.048533  619438 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:32:53.048790  619438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:32:53.048798  619438 out.go:374] Setting ErrFile to fd 2...
	I0917 00:32:53.048801  619438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:32:53.049018  619438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:32:53.049513  619438 out.go:368] Setting JSON to false
	I0917 00:32:53.050516  619438 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11716,"bootTime":1758057457,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:32:53.050646  619438 start.go:140] virtualization: kvm guest
	I0917 00:32:53.052823  619438 out.go:179] * [ha-671025] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:32:53.054178  619438 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:32:53.054271  619438 notify.go:220] Checking for updates...
	I0917 00:32:53.056434  619438 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:32:53.057686  619438 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:32:53.058908  619438 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 00:32:53.060062  619438 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:32:53.061204  619438 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:32:53.062799  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:32:53.062904  619438 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:32:53.089453  619438 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:32:53.089539  619438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:32:53.148341  619438 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:32:53.138207862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:32:53.148496  619438 docker.go:318] overlay module found
	I0917 00:32:53.150179  619438 out.go:179] * Using the docker driver based on existing profile
	I0917 00:32:53.151230  619438 start.go:304] selected driver: docker
	I0917 00:32:53.151250  619438 start.go:918] validating driver "docker" against &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:32:53.151427  619438 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:32:53.151523  619438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:32:53.207764  619438 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:32:53.197259177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:32:53.208608  619438 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:32:53.208644  619438 cni.go:84] Creating CNI manager for ""
	I0917 00:32:53.208723  619438 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:32:53.208799  619438 start.go:348] cluster config:
	{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:32:53.210881  619438 out.go:179] * Starting "ha-671025" primary control-plane node in "ha-671025" cluster
	I0917 00:32:53.212367  619438 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:32:53.213541  619438 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:32:53.214652  619438 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:32:53.214718  619438 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 00:32:53.214729  619438 cache.go:58] Caching tarball of preloaded images
	I0917 00:32:53.214774  619438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:32:53.214807  619438 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:32:53.214815  619438 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:32:53.214955  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:32:53.239640  619438 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:32:53.239670  619438 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:32:53.239694  619438 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:32:53.239727  619438 start.go:360] acquireMachinesLock for ha-671025: {Name:mk59b9e849284ed1f29625993b42430f4f0355ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:32:53.239821  619438 start.go:364] duration metric: took 66.466µs to acquireMachinesLock for "ha-671025"
	I0917 00:32:53.239847  619438 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:32:53.239857  619438 fix.go:54] fixHost starting: 
	I0917 00:32:53.240183  619438 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:32:53.258645  619438 fix.go:112] recreateIfNeeded on ha-671025: state=Stopped err=<nil>
	W0917 00:32:53.258676  619438 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:32:53.260365  619438 out.go:252] * Restarting existing docker container for "ha-671025" ...
	I0917 00:32:53.260462  619438 cli_runner.go:164] Run: docker start ha-671025
	I0917 00:32:53.507970  619438 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:32:53.529432  619438 kic.go:430] container "ha-671025" state is running.
	I0917 00:32:53.530679  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:32:53.550608  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:32:53.550906  619438 machine.go:93] provisionDockerMachine start ...
	I0917 00:32:53.551014  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:53.571235  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:32:53.571518  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0917 00:32:53.571532  619438 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:32:53.572179  619438 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48548->127.0.0.1:33178: read: connection reset by peer
	I0917 00:32:56.710627  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025
	
	I0917 00:32:56.710663  619438 ubuntu.go:182] provisioning hostname "ha-671025"
	I0917 00:32:56.710724  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:56.729879  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:32:56.730123  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0917 00:32:56.730136  619438 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025 && echo "ha-671025" | sudo tee /etc/hostname
	I0917 00:32:56.882161  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025
	
	I0917 00:32:56.882256  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:56.901113  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:32:56.901437  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0917 00:32:56.901465  619438 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:32:57.039832  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:32:57.039868  619438 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:32:57.039923  619438 ubuntu.go:190] setting up certificates
	I0917 00:32:57.039945  619438 provision.go:84] configureAuth start
	I0917 00:32:57.040038  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:32:57.059654  619438 provision.go:143] copyHostCerts
	I0917 00:32:57.059702  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:32:57.059734  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:32:57.059744  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:32:57.059817  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:32:57.059920  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:32:57.059938  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:32:57.059953  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:32:57.059984  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:32:57.060042  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:32:57.060059  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:32:57.060063  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:32:57.060107  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:32:57.060165  619438 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025 san=[127.0.0.1 192.168.49.2 ha-671025 localhost minikube]
	I0917 00:32:57.261590  619438 provision.go:177] copyRemoteCerts
	I0917 00:32:57.261669  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:32:57.261706  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:57.282218  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:57.380298  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:32:57.380375  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:32:57.406100  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:32:57.406164  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 00:32:57.431902  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:32:57.431973  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 00:32:57.458627  619438 provision.go:87] duration metric: took 418.658957ms to configureAuth
	I0917 00:32:57.458662  619438 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:32:57.458871  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:32:57.458975  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:57.477933  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:32:57.478176  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0917 00:32:57.478194  619438 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:32:57.778279  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:32:57.778306  619438 machine.go:96] duration metric: took 4.227377039s to provisionDockerMachine
	I0917 00:32:57.778321  619438 start.go:293] postStartSetup for "ha-671025" (driver="docker")
	I0917 00:32:57.778335  619438 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:32:57.778405  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:32:57.778457  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:57.799370  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:57.898480  619438 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:32:57.902232  619438 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:32:57.902263  619438 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:32:57.902270  619438 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:32:57.902278  619438 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:32:57.902290  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:32:57.902356  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:32:57.902449  619438 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:32:57.902461  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:32:57.902551  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:32:57.912046  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:32:57.938010  619438 start.go:296] duration metric: took 159.669671ms for postStartSetup
	I0917 00:32:57.938093  619438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:32:57.938130  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:57.958300  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:58.051975  619438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:32:58.057124  619438 fix.go:56] duration metric: took 4.817259212s for fixHost
	I0917 00:32:58.057152  619438 start.go:83] releasing machines lock for "ha-671025", held for 4.817316777s
	I0917 00:32:58.057223  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:32:58.076270  619438 ssh_runner.go:195] Run: cat /version.json
	I0917 00:32:58.076324  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:58.076348  619438 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:32:58.076443  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:58.096247  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:58.097159  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:58.262989  619438 ssh_runner.go:195] Run: systemctl --version
	I0917 00:32:58.267773  619438 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:32:58.409261  619438 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:32:58.414211  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:32:58.423687  619438 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:32:58.423780  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:32:58.433966  619438 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:32:58.434000  619438 start.go:495] detecting cgroup driver to use...
	I0917 00:32:58.434033  619438 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:32:58.434084  619438 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:32:58.447559  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:32:58.460424  619438 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:32:58.460531  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:32:58.474181  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:32:58.487071  619438 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:32:58.555422  619438 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:32:58.624823  619438 docker.go:234] disabling docker service ...
	I0917 00:32:58.624887  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:32:58.638410  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:32:58.650440  619438 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:32:58.717056  619438 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:32:58.784599  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:32:58.796601  619438 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:32:58.814550  619438 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:32:58.814628  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.825014  619438 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:32:58.825076  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.835600  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.845903  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.856370  619438 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:32:58.866050  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.876375  619438 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.886563  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.896783  619438 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:32:58.905534  619438 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:32:58.914324  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:32:58.980288  619438 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 00:32:59.086529  619438 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:32:59.086607  619438 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:32:59.090665  619438 start.go:563] Will wait 60s for crictl version
	I0917 00:32:59.090717  619438 ssh_runner.go:195] Run: which crictl
	I0917 00:32:59.094291  619438 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:32:59.129626  619438 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:32:59.129717  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:32:59.166530  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:32:59.205640  619438 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:32:59.206928  619438 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:32:59.224561  619438 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:32:59.228789  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:32:59.241758  619438 kubeadm.go:875] updating cluster {Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:32:59.241920  619438 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:32:59.241988  619438 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:32:59.285898  619438 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:32:59.285921  619438 crio.go:433] Images already preloaded, skipping extraction
	I0917 00:32:59.285968  619438 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:32:59.321059  619438 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:32:59.321084  619438 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:32:59.321093  619438 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0917 00:32:59.321190  619438 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:32:59.321250  619438 ssh_runner.go:195] Run: crio config
	I0917 00:32:59.369526  619438 cni.go:84] Creating CNI manager for ""
	I0917 00:32:59.369549  619438 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:32:59.369567  619438 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:32:59.369587  619438 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-671025 NodeName:ha-671025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:32:59.369753  619438 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-671025"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 00:32:59.369775  619438 kube-vip.go:115] generating kube-vip config ...
	I0917 00:32:59.369814  619438 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:32:59.383509  619438 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:32:59.383620  619438 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 00:32:59.383670  619438 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:32:59.393067  619438 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:32:59.393127  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:32:59.402584  619438 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0917 00:32:59.422262  619438 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:32:59.442170  619438 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I0917 00:32:59.461958  619438 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:32:59.481675  619438 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:32:59.485564  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:32:59.497547  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:32:59.561107  619438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:32:59.583877  619438 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.2
	I0917 00:32:59.583902  619438 certs.go:194] generating shared ca certs ...
	I0917 00:32:59.583919  619438 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:32:59.584079  619438 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:32:59.584130  619438 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:32:59.584138  619438 certs.go:256] generating profile certs ...
	I0917 00:32:59.584206  619438 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:32:59.584231  619438 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.5d6eefc6
	I0917 00:32:59.584246  619438 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.5d6eefc6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0917 00:33:00.130871  619438 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.5d6eefc6 ...
	I0917 00:33:00.130908  619438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.5d6eefc6: {Name:mkf467d0f9030b6e7125c3be410cb9c880d64270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:00.131088  619438 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.5d6eefc6 ...
	I0917 00:33:00.131108  619438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.5d6eefc6: {Name:mk8b3c4ad94a18f1741ce8fdbeceb16bceee6f1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:00.131220  619438 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.5d6eefc6 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt
	I0917 00:33:00.131404  619438 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.5d6eefc6 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key
	I0917 00:33:00.131601  619438 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:33:00.131625  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:33:00.131643  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:33:00.131658  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:33:00.131673  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:33:00.131687  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:33:00.131702  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:33:00.131714  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:33:00.131729  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:33:00.131788  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:33:00.131823  619438 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:33:00.131830  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:33:00.131857  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:33:00.131878  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:33:00.131897  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:33:00.131942  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:33:00.131980  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:00.132001  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:33:00.132015  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:33:00.132585  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:33:00.165089  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:33:00.198657  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:33:00.239751  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:33:00.280419  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:33:00.317099  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:33:00.355265  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:33:00.390225  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:33:00.418200  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:33:00.443790  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:33:00.469778  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:33:00.495605  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:33:00.516723  619438 ssh_runner.go:195] Run: openssl version
	I0917 00:33:00.522849  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:33:00.533838  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:00.538041  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:00.538112  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:00.545733  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:33:00.555787  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:33:00.566338  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:33:00.570140  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:33:00.570203  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:33:00.577687  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:33:00.587720  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:33:00.599252  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:33:00.603349  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:33:00.603456  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:33:00.611701  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:33:00.622604  619438 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:33:00.626359  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:33:00.633232  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:33:00.640671  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:33:00.647926  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:33:00.655266  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:33:00.662987  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:33:00.670413  619438 kubeadm.go:392] StartCluster: {Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:33:00.670534  619438 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 00:33:00.670583  619438 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:33:00.712724  619438 cri.go:89] found id: "dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c"
	I0917 00:33:00.712747  619438 cri.go:89] found id: "c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3"
	I0917 00:33:00.712751  619438 cri.go:89] found id: "3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da"
	I0917 00:33:00.712754  619438 cri.go:89] found id: "3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49"
	I0917 00:33:00.712757  619438 cri.go:89] found id: "feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15"
	I0917 00:33:00.712761  619438 cri.go:89] found id: ""
	I0917 00:33:00.712805  619438 ssh_runner.go:195] Run: sudo runc list -f json
	I0917 00:33:00.733477  619438 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49","pid":805,"status":"running","bundle":"/run/containers/storage/overlay-containers/3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49/userdata","rootfs":"/var/lib/containers/storage/overlay/d1bbef73ef376ea943ccf80c23fb8fd4556f886e52e63a59db0627508fb2430b/merged","created":"2025-09-17T00:33:00.224803069Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d64ad60b","io.kubernetes.container.name":"kube-vip","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d64ad60b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMes
sagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:33:00.170354801Z","io.kubernetes.cri-o.Image":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.ImageName":"ghcr.io/kube-vip/kube-vip:v1.0.0","io.kubernetes.cri-o.ImageRef":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-vip\",\"io.kubernetes.pod.name\":\"kube-vip-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a7817082b8b3b4ebaac6b1c6cc40fe3e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-vip-ha-671025_a7817082b8b3b4ebaac6b1c6cc40fe3e/kube-vip/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-vip\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/
storage/overlay/d1bbef73ef376ea943ccf80c23fb8fd4556f886e52e63a59db0627508fb2430b/merged","io.kubernetes.cri-o.Name":"k8s_kube-vip_kube-vip-ha-671025_kube-system_a7817082b8b3b4ebaac6b1c6cc40fe3e_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/aca3020b8c9d03c59812f32aa02323ace09e6b9784e7f9b6eae4976a3eab2f1d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"aca3020b8c9d03c59812f32aa02323ace09e6b9784e7f9b6eae4976a3eab2f1d","io.kubernetes.cri-o.SandboxName":"k8s_kube-vip-ha-671025_kube-system_a7817082b8b3b4ebaac6b1c6cc40fe3e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a7817082b8b3b4ebaac6b1c6cc40fe3e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a781708
2b8b3b4ebaac6b1c6cc40fe3e/containers/kube-vip/367d19bd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/admin.conf\",\"host_path\":\"/etc/kubernetes/admin.conf\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-vip-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a7817082b8b3b4ebaac6b1c6cc40fe3e","kubernetes.io/config.hash":"a7817082b8b3b4ebaac6b1c6cc40fe3e","kubernetes.io/config.seen":"2025-09-17T00:32:59.669171997Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da","pid":880,"status":"running","bundle":"/run/containers/
storage/overlay-containers/3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da/userdata","rootfs":"/var/lib/containers/storage/overlay/9b7a3dc090f584f6e4f5509cd9284edde85ace5b420fc8c9f6eae4139c98d2aa/merged","created":"2025-09-17T00:33:00.275833142Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d671eaa0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d671eaa0\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8443,\\\"containerPort\\\":8443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePa
th\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:33:00.202504428Z","io.kubernetes.cri-o.Image":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri-o.ImageRef":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b5ccb738eb1160dc60c2973028d04964\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-ha-671025_b5ccb738eb1160dc60c2973028d04964/kube-apiserver/1.log","io.kuberne
tes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9b7a3dc090f584f6e4f5509cd9284edde85ace5b420fc8c9f6eae4139c98d2aa/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-ha-671025_kube-system_b5ccb738eb1160dc60c2973028d04964_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c0bb4371ed6c8742b2ad9f89d7b5b46fbc83b2b33c92890300a7de93cb2ebbb6/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c0bb4371ed6c8742b2ad9f89d7b5b46fbc83b2b33c92890300a7de93cb2ebbb6","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-ha-671025_kube-system_b5ccb738eb1160dc60c2973028d04964_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b5ccb738eb1160dc60c2973028d04964/containers/kube-ap
iserver/6df491f2\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b5ccb738eb1160dc60c2973028d04964/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":f
alse}]","io.kubernetes.pod.name":"kube-apiserver-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b5ccb738eb1160dc60c2973028d04964","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"b5ccb738eb1160dc60c2973028d04964","kubernetes.io/config.seen":"2025-09-17T00:32:59.669167256Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3","pid":894,"status":"running","bundle":"/run/containers/storage/overlay-containers/c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3/userdata","rootfs":"/var/lib/containers/storage/overlay/064810f36ba8359e1cc403cdd3631d6973a
9bffec85a2a35b5e8e008790d2da1/merged","created":"2025-09-17T00:33:00.274952825Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"85eae708","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"85eae708\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID"
:"c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:33:00.203434002Z","io.kubernetes.cri-o.Image":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri-o.ImageRef":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"74a9cbd6392d4b9acfdd053de2761cb8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-ha-671025_74a9cbd6392d4b9acfdd053de2761cb8/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/064810f36ba8359e1cc403cdd3631d6973a9bffec85
a2a35b5e8e008790d2da1/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-ha-671025_kube-system_74a9cbd6392d4b9acfdd053de2761cb8_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0d6a7ac1856cbec973e10d8124dc32d2336942aefec9e4e328bba1938afb798a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0d6a7ac1856cbec973e10d8124dc32d2336942aefec9e4e328bba1938afb798a","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-ha-671025_kube-system_74a9cbd6392d4b9acfdd053de2761cb8_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/74a9cbd6392d4b9acfdd053de2761cb8/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/74a9cbd6392d4b9acfdd053de2761cb8/containers/kube
-scheduler/513703c7\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"74a9cbd6392d4b9acfdd053de2761cb8","kubernetes.io/config.hash":"74a9cbd6392d4b9acfdd053de2761cb8","kubernetes.io/config.seen":"2025-09-17T00:32:59.669170685Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c","pid":914,"status":"running","bundle":"/run/containers/storage/overlay-contai
ners/dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c/userdata","rootfs":"/var/lib/containers/storage/overlay/7b172e441c6d71eaa8c8337753bce771b451d1d95369d9d84519996303a3c5c0/merged","created":"2025-09-17T00:33:00.286793858Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7eaa1830","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7eaa1830\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/d
ev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:33:00.204654096Z","io.kubernetes.cri-o.Image":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri-o.ImageRef":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8d1e0f98935496199c8e8278a2410d09\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-ha-671025_8d1e0f98935496199c8e8278a2410d09/kube-c
ontroller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7b172e441c6d71eaa8c8337753bce771b451d1d95369d9d84519996303a3c5c0/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-ha-671025_kube-system_8d1e0f98935496199c8e8278a2410d09_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/17b3a59f2d7b6e908cfd321a66c6b87feb6fb4fe0c647bb872c8981c7768653d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"17b3a59f2d7b6e908cfd321a66c6b87feb6fb4fe0c647bb872c8981c7768653d","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-ha-671025_kube-system_8d1e0f98935496199c8e8278a2410d09_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/
etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8d1e0f98935496199c8e8278a2410d09/containers/kube-controller-manager/7587fc8c\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8d1e0f98935496199c8e8278a2410d09/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"ho
st_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8d1e0f98935496199c8e8278a2410d09","kubernetes.io/config.hash":"8d1e0f98935496199c8e8278a2410d09","kubernetes.io/config.seen":"2025-09-17T00:32:59.669169006Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.system
d.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15","pid":809,"status":"running","bundle":"/run/containers/storage/overlay-containers/feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15/userdata","rootfs":"/var/lib/containers/storage/overlay/0de8b6318aa0eefff40d78b1a2eccd71a123a2f8a8081d228455fb7b3b8e91aa/merged","created":"2025-09-17T00:33:00.227524758Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9e20c65","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9e20c65\",\"io.kubernetes.container.ports\":\"[{\
\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:33:00.156861142Z","io.kubernetes.cri-o.Image":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri-o.ImageRef":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"629bf94aa
8286a4aae957269fae7c79b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-ha-671025_629bf94aa8286a4aae957269fae7c79b/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0de8b6318aa0eefff40d78b1a2eccd71a123a2f8a8081d228455fb7b3b8e91aa/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-ha-671025_kube-system_629bf94aa8286a4aae957269fae7c79b_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ff786868f6409aa327dcae8a4aa518d72def9dcd14446677c7ba027c7a4a57b9/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ff786868f6409aa327dcae8a4aa518d72def9dcd14446677c7ba027c7a4a57b9","io.kubernetes.cri-o.SandboxName":"k8s_etcd-ha-671025_kube-system_629bf94aa8286a4aae957269fae7c79b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\
":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/629bf94aa8286a4aae957269fae7c79b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/629bf94aa8286a4aae957269fae7c79b/containers/etcd/188c438f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"629bf94aa8286a4aae957269fae7c79b","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"629bf94aa8286a4aae957269fae7c79b",
"kubernetes.io/config.seen":"2025-09-17T00:32:59.669161890Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0917 00:33:00.733792  619438 cri.go:126] list returned 5 containers
	I0917 00:33:00.733811  619438 cri.go:129] container: {ID:3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49 Status:running}
	I0917 00:33:00.733830  619438 cri.go:135] skipping {3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49 running}: state = "running", want "paused"
	I0917 00:33:00.733846  619438 cri.go:129] container: {ID:3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da Status:running}
	I0917 00:33:00.733857  619438 cri.go:135] skipping {3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da running}: state = "running", want "paused"
	I0917 00:33:00.733867  619438 cri.go:129] container: {ID:c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3 Status:running}
	I0917 00:33:00.733875  619438 cri.go:135] skipping {c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3 running}: state = "running", want "paused"
	I0917 00:33:00.733884  619438 cri.go:129] container: {ID:dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c Status:running}
	I0917 00:33:00.733891  619438 cri.go:135] skipping {dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c running}: state = "running", want "paused"
	I0917 00:33:00.733906  619438 cri.go:129] container: {ID:feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15 Status:running}
	I0917 00:33:00.733915  619438 cri.go:135] skipping {feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15 running}: state = "running", want "paused"
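The pass above looks for kube-system containers left in the "paused" runc state (from a prior `minikube pause`) so they can be resumed before restart; here all five are running, so every ID is skipped. A rough equivalent of that filter in shell, assuming jq is available (minikube does this matching in Go):

    # List kube-system container IDs via CRI, then resume any runc
    # container whose state is "paused".
    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    for id in $ids; do
      state=$(sudo runc list -f json | jq -r --arg id "$id" \
                '.[] | select(.id == $id) | .status')
      [ "$state" = paused ] && sudo runc resume "$id"
    done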
	I0917 00:33:00.733967  619438 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:33:00.743818  619438 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:33:00.743842  619438 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:33:00.743896  619438 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:33:00.753049  619438 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:33:00.753478  619438 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-671025" does not appear in /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:33:00.753570  619438 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-517646/kubeconfig needs updating (will repair): [kubeconfig missing "ha-671025" cluster setting kubeconfig missing "ha-671025" context setting]
	I0917 00:33:00.753860  619438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:00.754368  619438 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:33:00.754887  619438 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:33:00.754902  619438 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:33:00.754906  619438 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:33:00.754911  619438 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:33:00.754914  619438 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:33:00.754984  619438 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:33:00.755286  619438 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:33:00.764691  619438 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0917 00:33:00.764721  619438 kubeadm.go:593] duration metric: took 20.872209ms to restartPrimaryControlPlane
	I0917 00:33:00.764732  619438 kubeadm.go:394] duration metric: took 94.344936ms to StartCluster
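Two idempotency checks make this restart cheap: the kubeconfig is patched only because its "ha-671025" cluster and context entries were missing, and the control plane is reconfigured only if the freshly rendered kubeadm.yaml differs from the one on disk (here the diff is clean, so restartPrimaryControlPlane finishes in ~21ms). A sketch of both checks, with illustrative names and paths:

    # Reconfigure only when the rendered kubeadm config actually changed.
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null \
      || echo "kubeadm.yaml changed; control plane needs reconfiguration"
    # Repair a kubeconfig that lost its cluster/context entries.
    kubectl config set-cluster ha-671025 --server=https://192.168.49.2:8443 \
      --certificate-authority="$HOME/.minikube/ca.crt"
    kubectl config set-context ha-671025 --cluster=ha-671025 --user=ha-671025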
	I0917 00:33:00.764754  619438 settings.go:142] acquiring lock: {Name:mk3b4e5824fb8718eece00dc70a9d05f0af2a028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:00.764829  619438 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:33:00.765434  619438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:00.765678  619438 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:33:00.765703  619438 start.go:241] waiting for startup goroutines ...
	I0917 00:33:00.765712  619438 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:33:00.765954  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:00.768475  619438 out.go:179] * Enabled addons: 
	I0917 00:33:00.769396  619438 addons.go:514] duration metric: took 3.672053ms for enable addons: enabled=[]
	I0917 00:33:00.769427  619438 start.go:246] waiting for cluster config update ...
	I0917 00:33:00.769435  619438 start.go:255] writing updated cluster config ...
	I0917 00:33:00.770640  619438 out.go:203] 
	I0917 00:33:00.771782  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:00.771882  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:00.773295  619438 out.go:179] * Starting "ha-671025-m02" control-plane node in "ha-671025" cluster
	I0917 00:33:00.774266  619438 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:33:00.775272  619438 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:33:00.776246  619438 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:33:00.776270  619438 cache.go:58] Caching tarball of preloaded images
	I0917 00:33:00.776303  619438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:33:00.776369  619438 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:33:00.776383  619438 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:33:00.776522  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:00.798181  619438 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:33:00.798201  619438 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:33:00.798221  619438 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:33:00.798259  619438 start.go:360] acquireMachinesLock for ha-671025-m02: {Name:mk1465985964f60af81adbf10dbe0a21c7eb20d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:33:00.798335  619438 start.go:364] duration metric: took 52.828µs to acquireMachinesLock for "ha-671025-m02"
	I0917 00:33:00.798366  619438 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:33:00.798404  619438 fix.go:54] fixHost starting: m02
	I0917 00:33:00.798630  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:33:00.816952  619438 fix.go:112] recreateIfNeeded on ha-671025-m02: state=Stopped err=<nil>
	W0917 00:33:00.816988  619438 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:33:00.818588  619438 out.go:252] * Restarting existing docker container for "ha-671025-m02" ...
	I0917 00:33:00.818663  619438 cli_runner.go:164] Run: docker start ha-671025-m02
	I0917 00:33:01.089289  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:33:01.112171  619438 kic.go:430] container "ha-671025-m02" state is running.
	I0917 00:33:01.112607  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:33:01.134692  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:01.134992  619438 machine.go:93] provisionDockerMachine start ...
	I0917 00:33:01.135064  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:01.156210  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:01.156564  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0917 00:33:01.156582  619438 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:33:01.157427  619438 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34164->127.0.0.1:33183: read: connection reset by peer
	I0917 00:33:04.296769  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m02
	
	I0917 00:33:04.296809  619438 ubuntu.go:182] provisioning hostname "ha-671025-m02"
	I0917 00:33:04.296905  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:04.315073  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:04.315310  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0917 00:33:04.315323  619438 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m02 && echo "ha-671025-m02" | sudo tee /etc/hostname
	I0917 00:33:04.466025  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m02
	
	I0917 00:33:04.466110  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:04.484268  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:04.484535  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0917 00:33:04.484554  619438 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:33:04.621439  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
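The script above follows the Debian/Ubuntu convention of mapping the machine's own hostname to 127.0.1.1: it rewrites the 127.0.1.1 line if one exists and appends one otherwise, so local reverse lookups of the node name never stall. A quick check of the result:

    # Verify the provisioned hostname resolves locally after the rewrite.
    hostname                    # expected: ha-671025-m02
    getent hosts ha-671025-m02  # expected: 127.0.1.1  ha-671025-m02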
	I0917 00:33:04.621482  619438 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:33:04.621501  619438 ubuntu.go:190] setting up certificates
	I0917 00:33:04.621511  619438 provision.go:84] configureAuth start
	I0917 00:33:04.621573  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:33:04.640283  619438 provision.go:143] copyHostCerts
	I0917 00:33:04.640335  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:33:04.640368  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:33:04.640383  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:33:04.640480  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:33:04.640601  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:33:04.640634  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:33:04.640652  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:33:04.640698  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:33:04.640784  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:33:04.640809  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:33:04.640818  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:33:04.640852  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:33:04.640942  619438 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m02 san=[127.0.0.1 192.168.49.3 ha-671025-m02 localhost minikube]
	I0917 00:33:04.733693  619438 provision.go:177] copyRemoteCerts
	I0917 00:33:04.733759  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:33:04.733809  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:04.752499  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:33:04.850462  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:33:04.850518  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:33:04.876387  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:33:04.876625  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:33:04.904017  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:33:04.904091  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:33:04.932067  619438 provision.go:87] duration metric: took 310.54132ms to configureAuth
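The server cert generated above carries every name the machine may be reached by (the `san=[...]` list: loopback, the node IP 192.168.49.3, the hostname, and the minikube aliases); a missing SAN is the classic cause of TLS handshake failures against the docker machine. One way to confirm what was baked in (path illustrative):

    # Inspect the SANs embedded in the generated server certificate.
    openssl x509 -noout -text -in "$HOME/.minikube/machines/server.pem" \
      | grep -A1 'Subject Alternative Name'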
	I0917 00:33:04.932114  619438 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:33:04.932333  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:04.932519  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:04.950911  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:04.951173  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0917 00:33:04.951192  619438 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:33:13.583717  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:33:13.583742  619438 machine.go:96] duration metric: took 12.448736712s to provisionDockerMachine
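The SSH command above drops the flag into /etc/sysconfig/crio.minikube and restarts cri-o so the service CIDR (10.96.0.0/12) is treated as an insecure registry range. A sketch of the same mechanism, assuming the crio systemd unit sources that file (e.g. via an EnvironmentFile= line) and appends $CRIO_MINIKUBE_OPTIONS to its ExecStart:

    # Inject extra cri-o flags through an environment file the unit sources.
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio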
	I0917 00:33:13.583754  619438 start.go:293] postStartSetup for "ha-671025-m02" (driver="docker")
	I0917 00:33:13.583768  619438 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:33:13.583844  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:33:13.583889  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:13.602374  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:33:13.704271  619438 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:33:13.709862  619438 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:33:13.709910  619438 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:33:13.709921  619438 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:33:13.709930  619438 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:33:13.709945  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:33:13.710027  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:33:13.710128  619438 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:33:13.710138  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:33:13.710258  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:33:13.726542  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:33:13.762021  619438 start.go:296] duration metric: took 178.248287ms for postStartSetup
	I0917 00:33:13.762146  619438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:33:13.762202  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:13.785807  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:33:13.885926  619438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:33:13.890781  619438 fix.go:56] duration metric: took 13.092394555s for fixHost
	I0917 00:33:13.890814  619438 start.go:83] releasing machines lock for "ha-671025-m02", held for 13.092464098s
	I0917 00:33:13.890888  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:33:13.912194  619438 out.go:179] * Found network options:
	I0917 00:33:13.913617  619438 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:33:13.914820  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:33:13.914864  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:33:13.914934  619438 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:33:13.914975  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:13.915050  619438 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:33:13.915121  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:13.935804  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:33:13.936030  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:33:14.188511  619438 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:33:14.195453  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:33:14.211117  619438 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:33:14.211201  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:33:14.227642  619438 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
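Note the design choice above: conflicting CNI configs are not deleted but renamed with a .mk_disabled suffix, so the change is reversible and stray bridge/podman configs cannot shadow the CNI minikube installs (here none were found). The bridge/podman pass, spelled out:

    # Reversibly park conflicting CNI configs instead of deleting them.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( -name '*bridge*' -o -name '*podman*' \) -not -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;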
	I0917 00:33:14.227708  619438 start.go:495] detecting cgroup driver to use...
	I0917 00:33:14.227849  619438 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:33:14.227922  619438 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:33:14.251293  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:33:14.271238  619438 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:33:14.271313  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:33:14.288904  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:33:14.307961  619438 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:33:14.437900  619438 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:33:14.545190  619438 docker.go:234] disabling docker service ...
	I0917 00:33:14.545281  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:33:14.560872  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:33:14.573584  619438 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:33:14.680197  619438 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:33:14.811100  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
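Because docker and cri-docker are socket-activated, stopping them is not enough: the sequence above also disables the sockets and masks the services so nothing can re-activate them while cri-o owns the node. Condensed:

    # Fully retire a socket-activated runtime: stop both units,
    # disable the socket, and mask the service.
    sudo systemctl stop docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo "docker is down"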
	I0917 00:33:14.825885  619438 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:33:14.847059  619438 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:33:14.847127  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.859808  619438 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:33:14.859899  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.871797  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.883328  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.896664  619438 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:33:14.907675  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.918906  619438 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.929358  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.941273  619438 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:33:14.953043  619438 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:33:14.967648  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:15.083218  619438 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 00:33:21.777437  619438 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.694178293s)
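All of the runtime configuration above is done by editing the drop-in /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon cgroup, the unprivileged-port sysctl) plus pointing crictl at cri-o's socket, after which the 6.7s crio restart picks everything up. The two central edits, isolated:

    # Point crictl at cri-o's socket and switch cri-o to the systemd
    # cgroup driver via its drop-in config.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' \
      /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' \
      /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio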
	I0917 00:33:21.777485  619438 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:33:21.777539  619438 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:33:21.781615  619438 start.go:563] Will wait 60s for crictl version
	I0917 00:33:21.781681  619438 ssh_runner.go:195] Run: which crictl
	I0917 00:33:21.785837  619438 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:33:21.828119  619438 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:33:21.828217  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:33:21.874252  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:33:21.916319  619438 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:33:21.917788  619438 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:33:21.918929  619438 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:33:21.938354  619438 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:33:21.942655  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
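The /etc/hosts update above deliberately rebuilds the file in /tmp and copies it back rather than renaming it: inside a container /etc/hosts is a bind mount, so an mv or sed -i would replace the inode and break the mount, while cp rewrites it in place. A standalone sketch of the same refresh:

    # Refresh one /etc/hosts entry without replacing the file's inode.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$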
	I0917 00:33:21.956120  619438 mustload.go:65] Loading cluster: ha-671025
	I0917 00:33:21.956460  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:21.956800  619438 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:33:21.976493  619438 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:33:21.976752  619438 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.3
	I0917 00:33:21.976765  619438 certs.go:194] generating shared ca certs ...
	I0917 00:33:21.976779  619438 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:21.976919  619438 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:33:21.976970  619438 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:33:21.976980  619438 certs.go:256] generating profile certs ...
	I0917 00:33:21.977105  619438 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:33:21.977160  619438 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.289f7349
	I0917 00:33:21.977201  619438 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:33:21.977214  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:33:21.977226  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:33:21.977238  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:33:21.977248  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:33:21.977263  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:33:21.977277  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:33:21.977292  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:33:21.977304  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:33:21.977348  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:33:21.977374  619438 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:33:21.977384  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:33:21.977437  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:33:21.977468  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:33:21.977488  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:33:21.977537  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:33:21.977566  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:33:21.977579  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:21.977591  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:33:21.977641  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:33:21.996033  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
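Each `new ssh client` line corresponds to dialing the container's forwarded SSH port with the machine's id_rsa key, after which the subsequent `ssh_runner` commands run over that session. A self-contained sketch with golang.org/x/crypto/ssh, assuming the port and key path shown in the log (test rigs typically skip host-key verification, as here):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33178", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// Same probe the log runs next: the size of sa.pub on the node.
	out, err := sess.CombinedOutput("stat -c %s /var/lib/minikube/certs/sa.pub")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}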
	I0917 00:33:22.086756  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:33:22.091430  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:33:22.105578  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:33:22.109474  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 00:33:22.123413  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:33:22.127015  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:33:22.140675  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:33:22.145374  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 00:33:22.160202  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:33:22.164648  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:33:22.179040  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:33:22.182820  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:33:22.197252  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:33:22.226621  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:33:22.255420  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:33:22.284497  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:33:22.313100  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:33:22.339570  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:33:22.368270  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:33:22.395836  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:33:22.424911  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:33:22.451321  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:33:22.479698  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:33:22.509017  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:33:22.530192  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 00:33:22.550277  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:33:22.570982  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 00:33:22.591763  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:33:22.615610  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:33:22.637548  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:33:22.660728  619438 ssh_runner.go:195] Run: openssl version
	I0917 00:33:22.668525  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:33:22.679921  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:33:22.684865  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:33:22.684929  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:33:22.692513  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:33:22.703651  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:33:22.716758  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:22.721573  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:22.721639  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:22.729408  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:33:22.740799  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:33:22.754481  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:33:22.759515  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:33:22.759591  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:33:22.769873  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
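The `openssl x509 -hash` / `ln -fs` pairs above maintain OpenSSL's lookup-by-subject-hash convention: each trusted CA in /etc/ssl/certs gets a `<subject-hash>.0` symlink so verification can find it by hash. A small Go sketch that shells out to openssl for the hash (linkBySubjectHash is a hypothetical helper; the openssl flags are exactly those in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces the hash-symlink dance from the log.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // behave like ln -f: replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}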
	I0917 00:33:22.780940  619438 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:33:22.785123  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:33:22.792739  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:33:22.800305  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:33:22.808094  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:33:22.815985  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:33:22.823772  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
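`openssl x509 -checkend 86400` exits non-zero when a certificate expires within 24 hours; that exit status is what gates certificate regeneration in the runs above. The same check in pure Go with crypto/x509 (a sketch, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires inside d —
// the question `openssl x509 -checkend 86400` answers for d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}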
	I0917 00:33:22.830968  619438 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 crio true true} ...
	I0917 00:33:22.831108  619438 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:33:22.831135  619438 kube-vip.go:115] generating kube-vip config ...
	I0917 00:33:22.831174  619438 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:33:22.845445  619438 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
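kube-vip's control-plane load balancing is skipped here because `lsmod | grep ip_vs` exited 1, i.e. no IPVS modules are loaded. Since lsmod is just a formatter over /proc/modules, the probe can be done directly; a sketch (prefix matching, slightly stricter than the grep in the log):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// moduleLoaded scans /proc/modules — the data source behind lsmod — for a
// loaded kernel module whose name starts with name.
func moduleLoaded(name string) (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), name) {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := moduleLoaded("ip_vs")
	if err != nil {
		panic(err)
	}
	fmt.Println("ip_vs loaded:", ok)
}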
	I0917 00:33:22.845549  619438 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
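The manifest above is generated with the profile's VIP (192.168.49.254), interface, and API port substituted into the kube-vip environment. A trimmed text/template sketch of that kind of static-pod rendering (fields abbreviated; this is not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - name: address
      value: "{{.VIP}}"
    - name: vip_interface
      value: {{.Interface}}
    - name: port
      value: "{{.Port}}"
  hostNetwork: true
`

type params struct {
	Image, VIP, Interface string
	Port                  int
}

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	_ = t.Execute(os.Stdout, params{
		Image:     "ghcr.io/kube-vip/kube-vip:v1.0.0",
		VIP:       "192.168.49.254",
		Interface: "eth0",
		Port:      8443,
	})
}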
	I0917 00:33:22.845617  619438 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:33:22.856831  619438 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:33:22.856928  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:33:22.867889  619438 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 00:33:22.888469  619438 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:33:22.908498  619438 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:33:22.929249  619438 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:33:22.933575  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:33:22.945785  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:23.049186  619438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:33:23.063035  619438 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:33:23.063337  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:23.065109  619438 out.go:179] * Verifying Kubernetes components...
	I0917 00:33:23.066721  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:23.162455  619438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:33:23.176145  619438 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:33:23.176215  619438 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:33:23.176479  619438 node_ready.go:35] waiting up to 6m0s for node "ha-671025-m02" to be "Ready" ...
	I0917 00:33:23.185303  619438 node_ready.go:49] node "ha-671025-m02" is "Ready"
	I0917 00:33:23.185333  619438 node_ready.go:38] duration metric: took 8.819618ms for node "ha-671025-m02" to be "Ready" ...
	I0917 00:33:23.185350  619438 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:33:23.185420  619438 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:33:23.197637  619438 api_server.go:72] duration metric: took 134.535244ms to wait for apiserver process to appear ...
	I0917 00:33:23.197672  619438 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:33:23.197693  619438 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:33:23.202879  619438 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:33:23.204114  619438 api_server.go:141] control plane version: v1.34.0
	I0917 00:33:23.204224  619438 api_server.go:131] duration metric: took 6.534103ms to wait for apiserver health ...
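The healthz probe above is a mutually-authenticated HTTPS GET using the profile's client certificate and the cluster CA from the rest.Config dump earlier in the log. A hedged Go equivalent (paths taken from that dump; adjust per profile):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	base := "/home/jenkins/minikube-integration/21550-517646/.minikube"
	caPEM, err := os.ReadFile(base + "/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM) // trust only the cluster CA
	cert, err := tls.LoadX509KeyPair(
		base+"/profiles/ha-671025/client.crt",
		base+"/profiles/ha-671025/client.key",
	)
	if err != nil {
		panic(err)
	}
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{cert}},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // a healthy apiserver prints: 200 ok
}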
	I0917 00:33:23.204244  619438 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:33:23.211681  619438 system_pods.go:59] 24 kube-system pods found
	I0917 00:33:23.211742  619438 system_pods.go:61] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:23.211758  619438 system_pods.go:61] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:23.211769  619438 system_pods.go:61] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:33:23.211777  619438 system_pods.go:61] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:33:23.211783  619438 system_pods.go:61] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running
	I0917 00:33:23.211792  619438 system_pods.go:61] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 00:33:23.211798  619438 system_pods.go:61] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:33:23.211807  619438 system_pods.go:61] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:33:23.211816  619438 system_pods.go:61] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:33:23.211822  619438 system_pods.go:61] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:33:23.211829  619438 system_pods.go:61] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running
	I0917 00:33:23.211836  619438 system_pods.go:61] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:33:23.211844  619438 system_pods.go:61] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:33:23.211850  619438 system_pods.go:61] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running
	I0917 00:33:23.211859  619438 system_pods.go:61] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:33:23.211867  619438 system_pods.go:61] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:33:23.211875  619438 system_pods.go:61] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:33:23.211881  619438 system_pods.go:61] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:33:23.211888  619438 system_pods.go:61] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:33:23.211896  619438 system_pods.go:61] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running
	I0917 00:33:23.211902  619438 system_pods.go:61] "kube-vip-ha-671025" [bcb7c84b-932c-463e-a710-1d665741e70a] Running
	I0917 00:33:23.211907  619438 system_pods.go:61] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:33:23.211913  619438 system_pods.go:61] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:33:23.211919  619438 system_pods.go:61] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:33:23.211928  619438 system_pods.go:74] duration metric: took 7.670911ms to wait for pod list to return data ...
	I0917 00:33:23.211942  619438 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:33:23.215282  619438 default_sa.go:45] found service account: "default"
	I0917 00:33:23.215305  619438 default_sa.go:55] duration metric: took 3.354164ms for default service account to be created ...
	I0917 00:33:23.215314  619438 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:33:23.220686  619438 system_pods.go:86] 24 kube-system pods found
	I0917 00:33:23.220721  619438 system_pods.go:89] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:23.220730  619438 system_pods.go:89] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:23.220737  619438 system_pods.go:89] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:33:23.220741  619438 system_pods.go:89] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:33:23.220745  619438 system_pods.go:89] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running
	I0917 00:33:23.220750  619438 system_pods.go:89] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 00:33:23.220753  619438 system_pods.go:89] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:33:23.220759  619438 system_pods.go:89] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:33:23.220763  619438 system_pods.go:89] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:33:23.220768  619438 system_pods.go:89] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:33:23.220771  619438 system_pods.go:89] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running
	I0917 00:33:23.220774  619438 system_pods.go:89] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:33:23.220778  619438 system_pods.go:89] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:33:23.220782  619438 system_pods.go:89] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running
	I0917 00:33:23.220786  619438 system_pods.go:89] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:33:23.220790  619438 system_pods.go:89] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:33:23.220793  619438 system_pods.go:89] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:33:23.220796  619438 system_pods.go:89] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:33:23.220800  619438 system_pods.go:89] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:33:23.220803  619438 system_pods.go:89] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running
	I0917 00:33:23.220806  619438 system_pods.go:89] "kube-vip-ha-671025" [bcb7c84b-932c-463e-a710-1d665741e70a] Running
	I0917 00:33:23.220808  619438 system_pods.go:89] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:33:23.220812  619438 system_pods.go:89] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:33:23.220816  619438 system_pods.go:89] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:33:23.220822  619438 system_pods.go:126] duration metric: took 5.503704ms to wait for k8s-apps to be running ...
	I0917 00:33:23.220830  619438 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:33:23.220878  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:33:23.233344  619438 system_svc.go:56] duration metric: took 12.501522ms WaitForService to wait for kubelet
	I0917 00:33:23.233378  619438 kubeadm.go:578] duration metric: took 170.282ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:33:23.233426  619438 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:33:23.237203  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:23.237235  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:23.237249  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:23.237253  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:23.237258  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:23.237263  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:23.237268  619438 node_conditions.go:105] duration metric: took 3.836923ms to run NodePressure ...
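The NodePressure pass above reads each node's capacity (8 CPUs and 304681132Ki ephemeral storage per node here). Listing the same figures with client-go — a sketch that assumes KUBECONFIG points at the profile's kubeconfig:

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}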
	I0917 00:33:23.237281  619438 start.go:241] waiting for startup goroutines ...
	I0917 00:33:23.237310  619438 start.go:255] writing updated cluster config ...
	I0917 00:33:23.239362  619438 out.go:203] 
	I0917 00:33:23.240662  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:23.240787  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:23.242255  619438 out.go:179] * Starting "ha-671025-m03" control-plane node in "ha-671025" cluster
	I0917 00:33:23.243650  619438 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:33:23.244785  619438 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:33:23.245985  619438 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:33:23.246015  619438 cache.go:58] Caching tarball of preloaded images
	I0917 00:33:23.246076  619438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:33:23.246103  619438 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:33:23.246111  619438 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:33:23.246237  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:23.267677  619438 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:33:23.267698  619438 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:33:23.267719  619438 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:33:23.267746  619438 start.go:360] acquireMachinesLock for ha-671025-m03: {Name:mk60ae20c28e89b2af34eaf4825fcf2e756b9f82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:33:23.267801  619438 start.go:364] duration metric: took 38.266µs to acquireMachinesLock for "ha-671025-m03"
	I0917 00:33:23.267818  619438 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:33:23.267825  619438 fix.go:54] fixHost starting: m03
	I0917 00:33:23.268049  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:33:23.286470  619438 fix.go:112] recreateIfNeeded on ha-671025-m03: state=Stopped err=<nil>
	W0917 00:33:23.286501  619438 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:33:23.288337  619438 out.go:252] * Restarting existing docker container for "ha-671025-m03" ...
	I0917 00:33:23.288444  619438 cli_runner.go:164] Run: docker start ha-671025-m03
	I0917 00:33:23.539232  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:33:23.559852  619438 kic.go:430] container "ha-671025-m03" state is running.
	I0917 00:33:23.560281  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:33:23.582181  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:23.582448  619438 machine.go:93] provisionDockerMachine start ...
	I0917 00:33:23.582512  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:23.603240  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:23.603508  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0917 00:33:23.603524  619438 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:33:23.604268  619438 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54628->127.0.0.1:33188: read: connection reset by peer
	I0917 00:33:26.756053  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m03
	
	I0917 00:33:26.756095  619438 ubuntu.go:182] provisioning hostname "ha-671025-m03"
	I0917 00:33:26.756163  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:26.775553  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:26.775816  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0917 00:33:26.775832  619438 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m03 && echo "ha-671025-m03" | sudo tee /etc/hostname
	I0917 00:33:26.929724  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m03
	
	I0917 00:33:26.929811  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:26.948952  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:26.949181  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0917 00:33:26.949199  619438 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:33:27.097686  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:33:27.097724  619438 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:33:27.097808  619438 ubuntu.go:190] setting up certificates
	I0917 00:33:27.097838  619438 provision.go:84] configureAuth start
	I0917 00:33:27.097905  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:33:27.124607  619438 provision.go:143] copyHostCerts
	I0917 00:33:27.124661  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:33:27.124704  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:33:27.124712  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:33:27.124796  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:33:27.124902  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:33:27.124927  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:33:27.124938  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:33:27.124978  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:33:27.125071  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:33:27.125093  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:33:27.125097  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:33:27.125123  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:33:27.125202  619438 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m03 san=[127.0.0.1 192.168.49.4 ha-671025-m03 localhost minikube]
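The server cert generated above is signed by the machine CA with the SAN list [127.0.0.1 192.168.49.4 ha-671025-m03 localhost minikube]. A compact crypto/x509 sketch of issuing a certificate with those SANs; it self-signs for brevity where minikube signs with ca.pem/ca-key.pem, so treat it as an illustration only:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-671025-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the provision.go line above.
		DNSNames:    []string{"ha-671025-m03", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
	}
	// Self-signed here; minikube passes the CA cert and key as parent/signer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}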
	I0917 00:33:27.491028  619438 provision.go:177] copyRemoteCerts
	I0917 00:33:27.491103  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:33:27.491153  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:27.510894  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:33:27.621913  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:33:27.621991  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:33:27.659332  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:33:27.659436  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:33:27.694265  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:33:27.694331  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:33:27.729012  619438 provision.go:87] duration metric: took 631.150589ms to configureAuth
	I0917 00:33:27.729044  619438 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:33:27.729332  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:27.729498  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:27.752375  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:27.752667  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0917 00:33:27.752694  619438 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:33:28.163571  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:33:28.163606  619438 machine.go:96] duration metric: took 4.581141061s to provisionDockerMachine
	I0917 00:33:28.163625  619438 start.go:293] postStartSetup for "ha-671025-m03" (driver="docker")
	I0917 00:33:28.163636  619438 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:33:28.163694  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:33:28.163736  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:28.183221  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:33:28.282370  619438 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:33:28.286033  619438 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:33:28.286069  619438 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:33:28.286080  619438 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:33:28.286089  619438 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:33:28.286103  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:33:28.286167  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:33:28.286260  619438 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:33:28.286273  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:33:28.286385  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:33:28.296210  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:33:28.323607  619438 start.go:296] duration metric: took 159.96344ms for postStartSetup
	I0917 00:33:28.323744  619438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:33:28.323801  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:28.341948  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:33:28.437100  619438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:33:28.442217  619438 fix.go:56] duration metric: took 5.174381535s for fixHost
	I0917 00:33:28.442251  619438 start.go:83] releasing machines lock for "ha-671025-m03", held for 5.17444003s
	I0917 00:33:28.442339  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:33:28.462490  619438 out.go:179] * Found network options:
	I0917 00:33:28.463995  619438 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0917 00:33:28.465339  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:33:28.465379  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:33:28.465437  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:33:28.465456  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:33:28.465540  619438 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:33:28.465604  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:28.465608  619438 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:33:28.465666  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:28.484618  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:33:28.484954  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:33:28.729938  619438 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:33:28.735367  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:33:28.746253  619438 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:33:28.746345  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:33:28.757317  619438 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:33:28.757344  619438 start.go:495] detecting cgroup driver to use...
	I0917 00:33:28.757382  619438 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:33:28.757457  619438 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:33:28.772308  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:33:28.784900  619438 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:33:28.784967  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:33:28.800003  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:33:28.812730  619438 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:33:28.927855  619438 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:33:29.059441  619438 docker.go:234] disabling docker service ...
	I0917 00:33:29.059519  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:33:29.078537  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:33:29.093278  619438 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:33:29.210953  619438 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:33:29.324946  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:33:29.337107  619438 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:33:29.355136  619438 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:33:29.355186  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.366142  619438 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:33:29.366211  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.378355  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.389105  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.399699  619438 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:33:29.409712  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.420697  619438 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.430508  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.440921  619438 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:33:29.450466  619438 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:33:29.459577  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:29.574875  619438 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 00:33:29.816990  619438 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:33:29.817095  619438 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:33:29.821723  619438 start.go:563] Will wait 60s for crictl version
	I0917 00:33:29.821780  619438 ssh_runner.go:195] Run: which crictl
	I0917 00:33:29.825613  619438 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:33:29.861449  619438 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:33:29.861530  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:33:29.917974  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:33:29.959407  619438 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:33:29.960768  619438 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:33:29.962037  619438 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0917 00:33:29.963347  619438 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:33:29.990529  619438 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:33:29.995062  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:33:30.007594  619438 mustload.go:65] Loading cluster: ha-671025
	I0917 00:33:30.007810  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:30.008007  619438 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:33:30.028172  619438 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:33:30.028488  619438 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.4
	I0917 00:33:30.028502  619438 certs.go:194] generating shared ca certs ...
	I0917 00:33:30.028518  619438 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:30.028667  619438 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:33:30.028724  619438 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:33:30.028738  619438 certs.go:256] generating profile certs ...
	I0917 00:33:30.028835  619438 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:33:30.028918  619438 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.bb6f0fe7
	I0917 00:33:30.028969  619438 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:33:30.028985  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:33:30.029006  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:33:30.029022  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:33:30.029039  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:33:30.029053  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:33:30.029066  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:33:30.029085  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:33:30.029109  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:33:30.029181  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:33:30.029228  619438 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:33:30.029241  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:33:30.029285  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:33:30.029320  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:33:30.029350  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:33:30.029418  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:33:30.029458  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:30.029480  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:33:30.029497  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
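	The three "skipping valid signed profile cert regeneration" lines above mean the existing client, apiserver, and aggregator certs still parse, chain to the minikube CA, and sit inside their validity window, so minikube reuses them rather than minting new ones. A minimal Go sketch of that precondition (helper name and paths are illustrative; the real check lives in minikube's certs.go):

    package sketch

    import (
        "crypto/x509"
        "encoding/pem"
        "os"
    )

    // validSignedCert reports whether certPath parses, chains to the CA at
    // caPath, and is inside its validity window (Verify checks the time by
    // default) -- the precondition behind the "skipping valid signed
    // profile cert regeneration" lines above. Sketch only.
    func validSignedCert(certPath, caPath string) bool {
        parse := func(path string) *x509.Certificate {
            data, err := os.ReadFile(path)
            if err != nil {
                return nil
            }
            block, _ := pem.Decode(data)
            if block == nil {
                return nil
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                return nil
            }
            return cert
        }
        cert, ca := parse(certPath), parse(caPath)
        if cert == nil || ca == nil {
            return false
        }
        roots := x509.NewCertPool()
        roots.AddCert(ca)
        _, err := cert.Verify(x509.VerifyOptions{
            Roots:     roots,
            KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageAny}, // accept client and server certs alike
        })
        return err == nil
    }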
	I0917 00:33:30.029570  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:33:30.048859  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:33:30.137756  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:33:30.142385  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:33:30.157058  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:33:30.161473  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 00:33:30.176759  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:33:30.180509  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:33:30.193674  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:33:30.197197  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 00:33:30.210232  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:33:30.214138  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:33:30.227500  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:33:30.231351  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
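	For the shared control-plane material (sa.pub, sa.key, the front-proxy CA, and the etcd CA), the joining node first runs stat -c %s on an existing node and, when the file is there, copies it back into memory so every control plane reuses the same keys instead of generating fresh ones. A sketch of that stat-then-fetch round trip, assuming a hypothetical Runner that executes shell commands on the remote host (standing in for minikube's ssh_runner; the plain read stands in for the scp-to-memory step):

    package sketch

    // Runner abstracts remote command execution (hypothetical interface).
    type Runner interface {
        Output(cmd string) ([]byte, error)
    }

    // fetchIfPresent returns the remote file's contents when it exists,
    // or nil so the caller knows to generate the material fresh.
    func fetchIfPresent(r Runner, path string) []byte {
        if _, err := r.Output("stat -c %s " + path); err != nil {
            return nil // absent: the first control plane will create it
        }
        out, err := r.Output("sudo cat " + path) // assumption: simple read in place of scp-to-memory
        if err != nil {
            return nil
        }
        return out
    }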
	I0917 00:33:30.244274  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:33:30.271911  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:33:30.299112  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:33:30.326476  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:33:30.352993  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:33:30.380621  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:33:30.406324  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:33:30.432139  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:33:30.458308  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:33:30.483817  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:33:30.509827  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:33:30.537659  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:33:30.557593  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 00:33:30.577579  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:33:30.597023  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 00:33:30.617353  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:33:30.636531  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:33:30.656268  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:33:30.676462  619438 ssh_runner.go:195] Run: openssl version
	I0917 00:33:30.682486  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:33:30.693023  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:30.696932  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:30.696986  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:30.704184  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:33:30.714256  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:33:30.725254  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:33:30.728941  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:33:30.729013  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:33:30.736673  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:33:30.746358  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:33:30.757231  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:33:30.761269  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:33:30.761351  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:33:30.768689  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
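	The hashing steps above implement OpenSSL's CA lookup convention: a TLS client scanning /etc/ssl/certs resolves a CA by the file name <subject-hash>.0, so after computing the hash with `openssl x509 -hash -noout`, each PEM is symlinked to that name (b5213941.0 for the minikube CA here, 51391683.0 and 3ec20f2e.0 for the test certs). Collapsed into one line, with an illustrative input path:

    ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0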
	I0917 00:33:30.779054  619438 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:33:30.783069  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:33:30.790436  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:33:30.797491  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:33:30.804684  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:33:30.811602  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:33:30.818603  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
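	The six `openssl x509 -noout -in ... -checkend 86400` runs verify that each control-plane certificate stays valid for at least another 24 hours (86,400 seconds); a nonzero exit would force regeneration. The same check in Go, as a sketch with simplified path handling:

    package sketch

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin mirrors `openssl x509 -checkend <seconds>`: it reports
    // whether the first certificate in a PEM file expires inside the window.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }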
	I0917 00:33:30.825614  619438 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 crio true true} ...
	I0917 00:33:30.825731  619438 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
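	In the kubelet unit rendered above, the bare `ExecStart=` line is deliberate: for a normal (non-oneshot) systemd service a second `ExecStart=` is rejected unless the command list is cleared first, so a drop-in that replaces the command always writes an empty `ExecStart=` before the full one. The ` config:` block that follows is minikube echoing the cluster config the kubelet flags were derived from.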
	I0917 00:33:30.825755  619438 kube-vip.go:115] generating kube-vip config ...
	I0917 00:33:30.825793  619438 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:33:30.839517  619438 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:33:30.839587  619438 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
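	The manifest above is written to /etc/kubernetes/manifests (see the scp of kube-vip.yaml below), so the kubelet runs kube-vip as a static pod on each control plane. The env block encodes the HA scheme: the instances compete for the plndr-cp-lock lease (5s duration, 3s renew deadline, 1s retry), and the leader ARPs the virtual IP 192.168.49.254 on eth0 so the VIP's port 8443 always reaches a live apiserver. Once the node is up, the static pod can be inspected with, for example:

    kubectl get pod -n kube-system kube-vip-ha-671025-m03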
	I0917 00:33:30.839637  619438 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:33:30.849197  619438 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:33:30.849283  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:33:30.859805  619438 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 00:33:30.879168  619438 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:33:30.898461  619438 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:33:30.918131  619438 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:33:30.922054  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
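	The bash pipeline above makes the VIP mapping idempotent: it filters out any existing control-plane.minikube.internal line, appends the fresh 192.168.49.254 entry, and swaps /etc/hosts via a temp file and sudo cp. The same logic in Go, as a local sketch (the real code runs remotely and goes through the temp-file dance):

    package sketch

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry drops any line ending in "\t<name>" and appends
    // "<ip>\t<name>", mirroring the grep -v / echo / cp pipeline above.
    func ensureHostsEntry(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }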
	I0917 00:33:30.934606  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:31.047135  619438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:33:31.060828  619438 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:33:31.061141  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:31.063169  619438 out.go:179] * Verifying Kubernetes components...
	I0917 00:33:31.064429  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:31.179306  619438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:33:31.194472  619438 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:33:31.194609  619438 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:33:31.194890  619438 node_ready.go:35] waiting up to 6m0s for node "ha-671025-m03" to be "Ready" ...
	I0917 00:33:31.198458  619438 node_ready.go:49] node "ha-671025-m03" is "Ready"
	I0917 00:33:31.198488  619438 node_ready.go:38] duration metric: took 3.579476ms for node "ha-671025-m03" to be "Ready" ...
	I0917 00:33:31.198503  619438 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:33:31.198550  619438 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:33:31.212138  619438 api_server.go:72] duration metric: took 151.254038ms to wait for apiserver process to appear ...
	I0917 00:33:31.212172  619438 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:33:31.212199  619438 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:33:31.217814  619438 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:33:31.218774  619438 api_server.go:141] control plane version: v1.34.0
	I0917 00:33:31.218795  619438 api_server.go:131] duration metric: took 6.616763ms to wait for apiserver health ...
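	The readiness gate above is a plain poll: hit https://192.168.49.2:8443/healthz until it answers 200 with the body "ok". A generic version in Go, as a sketch (the real client trusts the cluster CA; the verification skip here is a demo-only assumption):

    package sketch

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls an apiserver /healthz endpoint until it answers
    // 200 "ok" or the deadline passes.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only; pin the cluster CA in real code
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("apiserver not healthy within %s", timeout)
    }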
	I0917 00:33:31.218803  619438 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:33:31.225098  619438 system_pods.go:59] 24 kube-system pods found
	I0917 00:33:31.225134  619438 system_pods.go:61] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:31.225141  619438 system_pods.go:61] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:31.225149  619438 system_pods.go:61] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:33:31.225155  619438 system_pods.go:61] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:33:31.225163  619438 system_pods.go:61] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:33:31.225168  619438 system_pods.go:61] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:33:31.225177  619438 system_pods.go:61] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:33:31.225185  619438 system_pods.go:61] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:33:31.225190  619438 system_pods.go:61] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:33:31.225199  619438 system_pods.go:61] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:33:31.225205  619438 system_pods.go:61] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:33:31.225209  619438 system_pods.go:61] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:33:31.225213  619438 system_pods.go:61] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:33:31.225219  619438 system_pods.go:61] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:33:31.225225  619438 system_pods.go:61] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:33:31.225228  619438 system_pods.go:61] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:33:31.225231  619438 system_pods.go:61] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:33:31.225235  619438 system_pods.go:61] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:33:31.225237  619438 system_pods.go:61] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:33:31.225242  619438 system_pods.go:61] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:33:31.225247  619438 system_pods.go:61] "kube-vip-ha-671025" [bcb7c84b-932c-463e-a710-1d665741e70a] Running
	I0917 00:33:31.225250  619438 system_pods.go:61] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:33:31.225253  619438 system_pods.go:61] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:33:31.225255  619438 system_pods.go:61] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:33:31.225261  619438 system_pods.go:74] duration metric: took 6.452715ms to wait for pod list to return data ...
	I0917 00:33:31.225280  619438 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:33:31.228376  619438 default_sa.go:45] found service account: "default"
	I0917 00:33:31.228411  619438 default_sa.go:55] duration metric: took 3.119992ms for default service account to be created ...
	I0917 00:33:31.228422  619438 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:33:31.233445  619438 system_pods.go:86] 24 kube-system pods found
	I0917 00:33:31.233478  619438 system_pods.go:89] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:31.233487  619438 system_pods.go:89] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:31.233491  619438 system_pods.go:89] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:33:31.233495  619438 system_pods.go:89] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:33:31.233501  619438 system_pods.go:89] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:33:31.233504  619438 system_pods.go:89] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:33:31.233508  619438 system_pods.go:89] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:33:31.233511  619438 system_pods.go:89] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:33:31.233517  619438 system_pods.go:89] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:33:31.233523  619438 system_pods.go:89] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:33:31.233529  619438 system_pods.go:89] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:33:31.233535  619438 system_pods.go:89] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:33:31.233540  619438 system_pods.go:89] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:33:31.233548  619438 system_pods.go:89] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:33:31.233555  619438 system_pods.go:89] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:33:31.233559  619438 system_pods.go:89] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:33:31.233566  619438 system_pods.go:89] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:33:31.233570  619438 system_pods.go:89] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:33:31.233576  619438 system_pods.go:89] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:33:31.233581  619438 system_pods.go:89] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:33:31.233587  619438 system_pods.go:89] "kube-vip-ha-671025" [bcb7c84b-932c-463e-a710-1d665741e70a] Running
	I0917 00:33:31.233590  619438 system_pods.go:89] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:33:31.233596  619438 system_pods.go:89] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:33:31.233599  619438 system_pods.go:89] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:33:31.233605  619438 system_pods.go:126] duration metric: took 5.178303ms to wait for k8s-apps to be running ...
	I0917 00:33:31.233615  619438 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:33:31.233661  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:33:31.246667  619438 system_svc.go:56] duration metric: took 13.0386ms WaitForService to wait for kubelet
	I0917 00:33:31.246701  619438 kubeadm.go:578] duration metric: took 185.824043ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:33:31.246730  619438 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:33:31.250636  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:31.250665  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:31.250679  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:31.250684  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:31.250690  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:31.250694  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:31.250700  619438 node_conditions.go:105] duration metric: took 3.96358ms to run NodePressure ...
	I0917 00:33:31.250716  619438 start.go:241] waiting for startup goroutines ...
	I0917 00:33:31.250743  619438 start.go:255] writing updated cluster config ...
	I0917 00:33:31.253191  619438 out.go:203] 
	I0917 00:33:31.255560  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:31.255716  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:31.257849  619438 out.go:179] * Starting "ha-671025-m04" worker node in "ha-671025" cluster
	I0917 00:33:31.259401  619438 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:33:31.260716  619438 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:33:31.262230  619438 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:33:31.262264  619438 cache.go:58] Caching tarball of preloaded images
	I0917 00:33:31.262330  619438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:33:31.262386  619438 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:33:31.262432  619438 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:33:31.262581  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:31.285684  619438 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:33:31.285706  619438 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:33:31.285722  619438 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:33:31.285751  619438 start.go:360] acquireMachinesLock for ha-671025-m04: {Name:mka8d143727db583191b041d9fdffdc34290d3fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:33:31.285824  619438 start.go:364] duration metric: took 55.532µs to acquireMachinesLock for "ha-671025-m04"
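	acquireMachinesLock serializes machine operations per profile: the spec above retries every 500ms for up to 10 minutes, and here the lock is free, so acquisition takes microseconds. A simplified lock-file sketch of that shape (minikube actually uses a cross-process mutex, not a bare lock file):

    package sketch

    import (
        "fmt"
        "os"
        "time"
    )

    // acquireLock polls for an exclusive lock file, retrying every delay
    // until timeout, mirroring the Delay:500ms / Timeout:10m0s spec above.
    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out waiting for lock %s", path)
            }
            time.Sleep(delay)
        }
    }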
	I0917 00:33:31.285843  619438 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:33:31.285851  619438 fix.go:54] fixHost starting: m04
	I0917 00:33:31.286063  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:33:31.305028  619438 fix.go:112] recreateIfNeeded on ha-671025-m04: state=Stopped err=<nil>
	W0917 00:33:31.305061  619438 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:33:31.307579  619438 out.go:252] * Restarting existing docker container for "ha-671025-m04" ...
	I0917 00:33:31.307671  619438 cli_runner.go:164] Run: docker start ha-671025-m04
	I0917 00:33:31.575879  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:33:31.595646  619438 kic.go:430] container "ha-671025-m04" state is running.
	I0917 00:33:31.596093  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:33:31.616747  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:31.617092  619438 machine.go:93] provisionDockerMachine start ...
	I0917 00:33:31.617170  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:33:31.636573  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:31.636791  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I0917 00:33:31.636802  619438 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:33:31.637630  619438 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36226->127.0.0.1:33193: read: connection reset by peer
	I0917 00:33:34.638709  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:33:37.640910  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:33:40.643532  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:33:43.644441  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:33:46.646832  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:33:49.647727  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:33:52.649735  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:33:55.650690  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:33:58.651030  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:01.651344  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:04.652841  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:07.653174  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:10.655161  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:13.656284  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:16.658064  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:19.658720  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:22.660831  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:25.661743  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:28.662460  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:31.663366  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:34.664358  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:37.666715  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:40.668752  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:43.669135  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:46.670730  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:49.671672  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:52.673038  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:55.674872  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:58.675353  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:01.676728  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:04.677624  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:07.680078  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:10.681718  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:13.682700  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:16.684701  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:19.686235  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:22.687651  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:25.689778  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:28.690485  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:31.691549  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:34.692838  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:37.695306  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:40.697845  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:43.698429  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:46.700789  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:49.701639  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:52.702370  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:55.704673  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:58.705496  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:01.706733  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:04.708175  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:07.709697  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:10.712190  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:13.713347  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:16.715721  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:19.716893  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:22.718572  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:25.720700  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:28.721777  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:31.722479  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
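	The three minutes of "connection refused" above are the provisioner redialing the container's forwarded SSH port (127.0.0.1:33193) roughly every three seconds; the container apparently never starts listening, so the loop runs out and provisioning moves on without a session. The dial loop itself is just retry-until-deadline; a generic Go sketch (interval and timeout values are illustrative):

    package sketch

    import (
        "fmt"
        "net"
        "time"
    )

    // dialWithRetry keeps redialing a TCP address until a connection is
    // accepted or the deadline passes, matching the ~3s cadence of the
    // "Error dialing TCP" lines above.
    func dialWithRetry(addr string, interval, timeout time.Duration) (net.Conn, error) {
        deadline := time.Now().Add(timeout)
        for {
            conn, err := net.DialTimeout("tcp", addr, interval)
            if err == nil {
                return conn, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("dial %s: %w", addr, err)
            }
            time.Sleep(interval)
        }
    }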
	I0917 00:36:31.722518  619438 ubuntu.go:182] provisioning hostname "ha-671025-m04"
	I0917 00:36:31.722607  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:31.744520  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:31.744620  619438 machine.go:96] duration metric: took 3m0.127509973s to provisionDockerMachine
	I0917 00:36:31.744723  619438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:36:31.744770  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:31.764601  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:31.764736  619438 retry.go:31] will retry after 288.945807ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:32.054420  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:32.074595  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:32.074728  619438 retry.go:31] will retry after 272.369407ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:32.348309  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:32.368462  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:32.368608  619438 retry.go:31] will retry after 744.516266ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:33.113868  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:33.133032  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:33.133163  619438 retry.go:31] will retry after 492.951246ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:33.626619  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:33.647357  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	W0917 00:36:33.647505  619438 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:36:33.647528  619438 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
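	Once the docker inspect calls start failing (exit code 1: the container is not running, so no SSH host-port can be read), retry.go reruns them after short randomized waits (288.9ms, 272.4ms, 744.5ms, ...) before start.go gives up and records the df errors. The pattern, sketched (not minikube's retry package):

    package sketch

    import (
        "math/rand"
        "time"
    )

    // retryWithBackoff reruns fn with randomized sleeps scaled by the
    // attempt number, the shape behind the "will retry after ..." lines.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            time.Sleep(base + time.Duration(rand.Int63n(int64(base)*int64(i+1))))
        }
        return err
    }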
	I0917 00:36:33.647587  619438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:36:33.647631  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:33.666215  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:33.666338  619438 retry.go:31] will retry after 272.675779ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:33.939657  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:33.958470  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:33.958588  619438 retry.go:31] will retry after 525.446207ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:34.484331  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:34.504346  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:34.504492  619438 retry.go:31] will retry after 588.594219ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:35.093370  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:35.116893  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	W0917 00:36:35.117042  619438 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:36:35.117086  619438 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:35.117113  619438 fix.go:56] duration metric: took 3m3.831261756s for fixHost
	I0917 00:36:35.117126  619438 start.go:83] releasing machines lock for "ha-671025-m04", held for 3m3.831291336s
	W0917 00:36:35.117142  619438 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:36:35.117240  619438 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:35.117254  619438 start.go:729] Will try again in 5 seconds ...
	I0917 00:36:40.118524  619438 start.go:360] acquireMachinesLock for ha-671025-m04: {Name:mka8d143727db583191b041d9fdffdc34290d3fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:36:40.118656  619438 start.go:364] duration metric: took 88.188µs to acquireMachinesLock for "ha-671025-m04"
	I0917 00:36:40.118689  619438 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:36:40.118698  619438 fix.go:54] fixHost starting: m04
	I0917 00:36:40.119106  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:36:40.139538  619438 fix.go:112] recreateIfNeeded on ha-671025-m04: state=Stopped err=<nil>
	W0917 00:36:40.139579  619438 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:36:40.141549  619438 out.go:252] * Restarting existing docker container for "ha-671025-m04" ...
	I0917 00:36:40.141624  619438 cli_runner.go:164] Run: docker start ha-671025-m04
	I0917 00:36:40.412862  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:36:40.433322  619438 kic.go:430] container "ha-671025-m04" state is running.
	I0917 00:36:40.433799  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:36:40.453513  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:36:40.453934  619438 machine.go:93] provisionDockerMachine start ...
	I0917 00:36:40.454059  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:36:40.473978  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:36:40.474315  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I0917 00:36:40.474331  619438 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:36:40.475099  619438 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33606->127.0.0.1:33198: read: connection reset by peer
	I0917 00:36:43.475724  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:36:46.476660  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:36:49.478345  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:36:52.479547  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:36:55.482132  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:36:58.483337  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:01.484607  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:04.485839  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:07.487714  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:10.489661  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:13.490227  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:16.492090  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:19.492645  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:22.493651  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:25.495677  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:28.496275  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:31.497224  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:34.497736  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:37.499709  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:40.502218  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:43.502692  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:46.504930  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:49.506113  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:52.506643  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:55.507569  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:58.507989  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:01.508674  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:04.509297  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:07.511674  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:10.512110  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:13.512683  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:16.515058  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:19.516277  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:22.517225  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:25.519308  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:28.519717  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:31.520615  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:34.522114  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:37.523670  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:40.526331  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:43.527374  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:46.529741  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:49.531301  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:52.532585  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:55.533793  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:58.534231  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:01.534621  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:04.536103  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:07.538458  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:10.540484  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:13.541711  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:16.543992  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:19.545340  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:22.546576  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:25.548676  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:28.549734  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:31.550736  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:34.551691  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:37.553774  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:40.555606  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:39:40.555645  619438 ubuntu.go:182] provisioning hostname "ha-671025-m04"
	I0917 00:39:40.555731  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:40.576194  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:40.576295  619438 machine.go:96] duration metric: took 3m0.122321612s to provisionDockerMachine
	I0917 00:39:40.576379  619438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:39:40.576440  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:40.595844  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:40.595977  619438 retry.go:31] will retry after 334.138339ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:40.931319  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:40.951370  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:40.951504  619438 retry.go:31] will retry after 347.147392ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:41.299070  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:41.319717  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:41.319850  619438 retry.go:31] will retry after 612.672267ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:41.933618  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:41.954663  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	W0917 00:39:41.954778  619438 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:39:41.954797  619438 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:41.954845  619438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:39:41.954878  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:41.975511  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:41.975621  619438 retry.go:31] will retry after 279.089961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:42.255093  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:42.275630  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:42.275759  619438 retry.go:31] will retry after 427.799265ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:42.704460  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:42.723085  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:42.723291  619438 retry.go:31] will retry after 748.226264ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:43.472625  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:43.493097  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	W0917 00:39:43.493238  619438 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:39:43.493260  619438 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:43.493279  619438 fix.go:56] duration metric: took 3m3.3745821s for fixHost
	I0917 00:39:43.493294  619438 start.go:83] releasing machines lock for "ha-671025-m04", held for 3m3.374622198s
	W0917 00:39:43.493451  619438 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-671025" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:43.495244  619438 out.go:203] 
	W0917 00:39:43.496536  619438 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:39:43.496558  619438 out.go:285] * 
	W0917 00:39:43.498254  619438 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 00:39:43.499426  619438 out.go:203] 

** /stderr **
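The stderr above shows the root cause: to open an SSH session, minikube resolves the host-mapped port for the container's 22/tcp binding with a docker inspect Go template, and that lookup keeps returning exit code 1 because the ha-671025-m04 container never reaches the running state (a stopped container has no entries in NetworkSettings.Ports, so the template index fails). A minimal standalone sketch of that lookup (not minikube's own code; the container name is taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template seen in the cli_runner.go lines above: index into
	// NetworkSettings.Ports["22/tcp"] and print the first HostPort.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, "ha-671025-m04").CombinedOutput()
	if err != nil {
		// For a stopped container the Ports map is empty, the template
		// index fails, and docker exits non-zero -- the retry loop above.
		fmt.Printf("inspect failed: %v: %s", err, out)
		return
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out)))
}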
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 -p ha-671025 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 node list --alsologtostderr -v 5
ha_test.go:481: reported node list is not the same after restart. Before restart: ha-671025	192.168.49.2
ha-671025-m02	192.168.49.3
ha-671025-m03	192.168.49.4
ha-671025-m04	

After restart: ha-671025	192.168.49.2
ha-671025-m02	192.168.49.3
ha-671025-m03	192.168.49.4
ha-671025-m04	192.168.49.5
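The assertion at ha_test.go:481 is a textual comparison of the two snapshots above, and the only line that changed is m04's: its IP column was empty before the restart (the node was recorded but never provisioned) and reads 192.168.49.5 afterwards. A hypothetical helper (not part of ha_test.go) that reproduces that line-by-line diff:

package main

import (
	"fmt"
	"strings"
)

// changedLines reports every line that differs between two node-list snapshots.
func changedLines(before, after string) []string {
	b := strings.Split(strings.TrimSpace(before), "\n")
	a := strings.Split(strings.TrimSpace(after), "\n")
	var diffs []string
	for i, line := range b {
		other := ""
		if i < len(a) {
			other = a[i]
		}
		if line != other {
			diffs = append(diffs, fmt.Sprintf("%q -> %q", line, other))
		}
	}
	return diffs
}

func main() {
	before := "ha-671025\t192.168.49.2\nha-671025-m04\t"
	after := "ha-671025\t192.168.49.2\nha-671025-m04\t192.168.49.5"
	// Prints the m04 line, whose IP column gained a value after restart.
	fmt.Println(changedLines(before, after))
}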
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-671025
helpers_test.go:243: (dbg) docker inspect ha-671025:

-- stdout --
	[
	    {
	        "Id": "843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea",
	        "Created": "2025-09-17T00:28:07.60079298Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 619633,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:32:53.286176868Z",
	            "FinishedAt": "2025-09-17T00:32:52.645586403Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/hostname",
	        "HostsPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/hosts",
	        "LogPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea-json.log",
	        "Name": "/ha-671025",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-671025:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-671025",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea",
	                "LowerDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-671025",
	                "Source": "/var/lib/docker/volumes/ha-671025/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-671025",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-671025",
	                "name.minikube.sigs.k8s.io": "ha-671025",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3e88ab0b1cbcc741c291833bfdeaa68e46e3b5db9345dc0aa90d473d7f1955a0",
	            "SandboxKey": "/var/run/docker/netns/3e88ab0b1cbc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-671025": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:78:32:58:80:a9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0c35d0ccc41812bde7181e33c481a92e6c52d2d90efef6c84bca54a78763ef8",
	                    "EndpointID": "62110bd5e439ab2c08160ae7846f5c9267265e2e870f01c3985d76fb403512f7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-671025",
	                        "843490787feb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
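The HostPort values that the failed lookup needed live under NetworkSettings.Ports in dumps like the one above. A self-contained sketch that decodes just those bindings from saved `docker inspect` output (the inspect.json path is an assumption for illustration):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// inspectEntry models only the fields needed here; docker inspect
// emits a JSON array of such objects.
type inspectEntry struct {
	Name            string
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	data, err := os.ReadFile("inspect.json") // assumed: saved inspect output
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(data, &entries); err != nil {
		panic(err)
	}
	for _, e := range entries {
		for proto, bindings := range e.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s %s -> %s:%s\n", e.Name, proto, b.HostIp, b.HostPort)
			}
		}
	}
}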
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-671025 -n ha-671025
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-671025 logs -n 25: (1.286666446s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt ha-671025-m02:/home/docker/cp-test_ha-671025-m03_ha-671025-m02.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m02 sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025-m02.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ cp      │ ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt ha-671025-m04:/home/docker/cp-test_ha-671025-m03_ha-671025-m04.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025-m04.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp testdata/cp-test.txt ha-671025-m04:/home/docker/cp-test.txt                                                            │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile688907033/001/cp-test_ha-671025-m04.txt │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025:/home/docker/cp-test_ha-671025-m04_ha-671025.txt                      │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025.txt                                                │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025-m02:/home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m02 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025-m03:/home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ node    │ ha-671025 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:31 UTC │
	│ node    │ ha-671025 node start m02 --alsologtostderr -v 5                                                                                     │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:31 UTC │ 17 Sep 25 00:31 UTC │
	│ node    │ ha-671025 node list --alsologtostderr -v 5                                                                                          │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:32 UTC │                     │
	│ stop    │ ha-671025 stop --alsologtostderr -v 5                                                                                               │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:32 UTC │ 17 Sep 25 00:32 UTC │
	│ start   │ ha-671025 start --wait true --alsologtostderr -v 5                                                                                  │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:32 UTC │                     │
	│ node    │ ha-671025 node list --alsologtostderr -v 5                                                                                          │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:39 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:32:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:32:53.048533  619438 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:32:53.048790  619438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:32:53.048798  619438 out.go:374] Setting ErrFile to fd 2...
	I0917 00:32:53.048801  619438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:32:53.049018  619438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:32:53.049513  619438 out.go:368] Setting JSON to false
	I0917 00:32:53.050516  619438 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11716,"bootTime":1758057457,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:32:53.050646  619438 start.go:140] virtualization: kvm guest
	I0917 00:32:53.052823  619438 out.go:179] * [ha-671025] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:32:53.054178  619438 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:32:53.054271  619438 notify.go:220] Checking for updates...
	I0917 00:32:53.056434  619438 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:32:53.057686  619438 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:32:53.058908  619438 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 00:32:53.060062  619438 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:32:53.061204  619438 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:32:53.062799  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:32:53.062904  619438 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:32:53.089453  619438 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:32:53.089539  619438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:32:53.148341  619438 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:32:53.138207862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:32:53.148496  619438 docker.go:318] overlay module found
	I0917 00:32:53.150179  619438 out.go:179] * Using the docker driver based on existing profile
	I0917 00:32:53.151230  619438 start.go:304] selected driver: docker
	I0917 00:32:53.151250  619438 start.go:918] validating driver "docker" against &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:32:53.151427  619438 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:32:53.151523  619438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:32:53.207764  619438 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:32:53.197259177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:32:53.208608  619438 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:32:53.208644  619438 cni.go:84] Creating CNI manager for ""
	I0917 00:32:53.208723  619438 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:32:53.208799  619438 start.go:348] cluster config:
	{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: N
etworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubef
low:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:32:53.210881  619438 out.go:179] * Starting "ha-671025" primary control-plane node in "ha-671025" cluster
	I0917 00:32:53.212367  619438 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:32:53.213541  619438 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:32:53.214652  619438 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:32:53.214718  619438 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 00:32:53.214729  619438 cache.go:58] Caching tarball of preloaded images
	I0917 00:32:53.214774  619438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:32:53.214807  619438 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:32:53.214815  619438 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:32:53.214955  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:32:53.239640  619438 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:32:53.239670  619438 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:32:53.239694  619438 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:32:53.239727  619438 start.go:360] acquireMachinesLock for ha-671025: {Name:mk59b9e849284ed1f29625993b42430f4f0355ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:32:53.239821  619438 start.go:364] duration metric: took 66.466µs to acquireMachinesLock for "ha-671025"
	I0917 00:32:53.239847  619438 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:32:53.239857  619438 fix.go:54] fixHost starting: 
	I0917 00:32:53.240183  619438 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:32:53.258645  619438 fix.go:112] recreateIfNeeded on ha-671025: state=Stopped err=<nil>
	W0917 00:32:53.258676  619438 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:32:53.260365  619438 out.go:252] * Restarting existing docker container for "ha-671025" ...
	I0917 00:32:53.260462  619438 cli_runner.go:164] Run: docker start ha-671025
	I0917 00:32:53.507970  619438 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:32:53.529432  619438 kic.go:430] container "ha-671025" state is running.
	I0917 00:32:53.530679  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:32:53.550608  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:32:53.550906  619438 machine.go:93] provisionDockerMachine start ...
	I0917 00:32:53.551014  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:53.571235  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:32:53.571518  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0917 00:32:53.571532  619438 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:32:53.572179  619438 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48548->127.0.0.1:33178: read: connection reset by peer
	I0917 00:32:56.710627  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025
	
	I0917 00:32:56.710663  619438 ubuntu.go:182] provisioning hostname "ha-671025"
	I0917 00:32:56.710724  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:56.729879  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:32:56.730123  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0917 00:32:56.730136  619438 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025 && echo "ha-671025" | sudo tee /etc/hostname
	I0917 00:32:56.882161  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025
	
	I0917 00:32:56.882256  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:56.901113  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:32:56.901437  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0917 00:32:56.901465  619438 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:32:57.039832  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:32:57.039868  619438 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:32:57.039923  619438 ubuntu.go:190] setting up certificates
	I0917 00:32:57.039945  619438 provision.go:84] configureAuth start
	I0917 00:32:57.040038  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:32:57.059654  619438 provision.go:143] copyHostCerts
	I0917 00:32:57.059702  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:32:57.059734  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:32:57.059744  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:32:57.059817  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:32:57.059920  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:32:57.059938  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:32:57.059953  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:32:57.059984  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:32:57.060042  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:32:57.060059  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:32:57.060063  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:32:57.060107  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:32:57.060165  619438 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025 san=[127.0.0.1 192.168.49.2 ha-671025 localhost minikube]
	I0917 00:32:57.261590  619438 provision.go:177] copyRemoteCerts
	I0917 00:32:57.261669  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:32:57.261706  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:57.282218  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:57.380298  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:32:57.380375  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:32:57.406100  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:32:57.406164  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 00:32:57.431902  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:32:57.431973  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 00:32:57.458627  619438 provision.go:87] duration metric: took 418.658957ms to configureAuth
	I0917 00:32:57.458662  619438 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:32:57.458871  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:32:57.458975  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:57.477933  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:32:57.478176  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0917 00:32:57.478194  619438 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:32:57.778279  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:32:57.778306  619438 machine.go:96] duration metric: took 4.227377039s to provisionDockerMachine
	I0917 00:32:57.778321  619438 start.go:293] postStartSetup for "ha-671025" (driver="docker")
	I0917 00:32:57.778335  619438 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:32:57.778405  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:32:57.778457  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:57.799370  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:57.898480  619438 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:32:57.902232  619438 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:32:57.902263  619438 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:32:57.902270  619438 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:32:57.902278  619438 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:32:57.902290  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:32:57.902356  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:32:57.902449  619438 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:32:57.902461  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:32:57.902551  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:32:57.912046  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:32:57.938010  619438 start.go:296] duration metric: took 159.669671ms for postStartSetup
	I0917 00:32:57.938093  619438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:32:57.938130  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:57.958300  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:58.051975  619438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:32:58.057124  619438 fix.go:56] duration metric: took 4.817259212s for fixHost
	I0917 00:32:58.057152  619438 start.go:83] releasing machines lock for "ha-671025", held for 4.817316777s
	I0917 00:32:58.057223  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:32:58.076270  619438 ssh_runner.go:195] Run: cat /version.json
	I0917 00:32:58.076324  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:58.076348  619438 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:32:58.076443  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:58.096247  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:58.097159  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:58.262989  619438 ssh_runner.go:195] Run: systemctl --version
	I0917 00:32:58.267773  619438 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:32:58.409261  619438 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:32:58.414211  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:32:58.423687  619438 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:32:58.423780  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:32:58.433966  619438 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:32:58.434000  619438 start.go:495] detecting cgroup driver to use...
	I0917 00:32:58.434033  619438 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:32:58.434084  619438 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:32:58.447559  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:32:58.460424  619438 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:32:58.460531  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:32:58.474181  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:32:58.487071  619438 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:32:58.555422  619438 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:32:58.624823  619438 docker.go:234] disabling docker service ...
	I0917 00:32:58.624887  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:32:58.638410  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:32:58.650440  619438 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:32:58.717056  619438 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:32:58.784599  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:32:58.796601  619438 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:32:58.814550  619438 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:32:58.814628  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.825014  619438 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:32:58.825076  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.835600  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.845903  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.856370  619438 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:32:58.866050  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.876375  619438 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.886563  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.896783  619438 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:32:58.905534  619438 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
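Taken together, the sed and grep edits above should leave the drop-in roughly as sketched below. Only the key/value lines are confirmed by the commands in the log; the section headers are assumptions based on the stock cri-o config layout (pause_image lives under [crio.image], the rest under [crio.runtime]):

    # /etc/crio/crio.conf.d/02-crio.conf, reconstructed rather than captured
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]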
	I0917 00:32:58.914324  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:32:58.980288  619438 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 00:32:59.086529  619438 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:32:59.086607  619438 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:32:59.090665  619438 start.go:563] Will wait 60s for crictl version
	I0917 00:32:59.090717  619438 ssh_runner.go:195] Run: which crictl
	I0917 00:32:59.094291  619438 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:32:59.129626  619438 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:32:59.129717  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:32:59.166530  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:32:59.205640  619438 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:32:59.206928  619438 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:32:59.224561  619438 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:32:59.228789  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:32:59.241758  619438 kubeadm.go:875] updating cluster {Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-ga
dget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fal
se DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:32:59.241920  619438 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:32:59.241988  619438 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:32:59.285898  619438 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:32:59.285921  619438 crio.go:433] Images already preloaded, skipping extraction
	I0917 00:32:59.285968  619438 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:32:59.321059  619438 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:32:59.321084  619438 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:32:59.321093  619438 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0917 00:32:59.321190  619438 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
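The empty ExecStart= line in the unit above is deliberate systemd idiom: in a drop-in it clears any ExecStart inherited from the base unit, so the line that follows replaces the command instead of appending a second one. The merged result can be inspected on the node:

    systemctl cat kubelet   # base unit plus the 10-kubeadm.conf drop-in scp'd below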
	I0917 00:32:59.321250  619438 ssh_runner.go:195] Run: crio config
	I0917 00:32:59.369526  619438 cni.go:84] Creating CNI manager for ""
	I0917 00:32:59.369549  619438 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:32:59.369567  619438 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:32:59.369587  619438 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-671025 NodeName:ha-671025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:32:59.369753  619438 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-671025"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
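A config of this shape can be validated offline before use; a hedged sketch (minikube does not necessarily run this step itself, and the path matches where the log stages the file as kubeadm.yaml.new further down):

    # parses the InitConfiguration/ClusterConfiguration and runs the phases
    # in dry-run mode, without committing changes to the node
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run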
	
	I0917 00:32:59.369775  619438 kube-vip.go:115] generating kube-vip config ...
	I0917 00:32:59.369814  619438 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:32:59.383509  619438 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
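The failed lsmod probe above is non-fatal: without the ip_vs module kube-vip gives up on IPVS load-balancing and falls back to plain ARP failover for the VIP (vip_arp is enabled in the generated config below). On hosts where IPVS is wanted, a sketch for loading and persisting the module:

    sudo modprobe ip_vs
    echo ip_vs | sudo tee /etc/modules-load.d/ipvs.conf   # auto-load on boot
    lsmod | grep ip_vs                                    # re-run the probe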
	I0917 00:32:59.383620  619438 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
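Per the env block above, kube-vip elects one control-plane node through a coordination lease named plndr-cp-lock in kube-system (5s lease, 3s renew deadline, 1s retry), and the winner answers ARP for 192.168.49.254. Once the API server is reachable, the current holder can be checked with:

    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'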
	I0917 00:32:59.383670  619438 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:32:59.393067  619438 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:32:59.393127  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:32:59.402584  619438 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0917 00:32:59.422262  619438 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:32:59.442170  619438 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I0917 00:32:59.461958  619438 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:32:59.481675  619438 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:32:59.485564  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:32:59.497547  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:32:59.561107  619438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:32:59.583877  619438 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.2
	I0917 00:32:59.583902  619438 certs.go:194] generating shared ca certs ...
	I0917 00:32:59.583919  619438 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:32:59.584079  619438 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:32:59.584130  619438 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:32:59.584138  619438 certs.go:256] generating profile certs ...
	I0917 00:32:59.584206  619438 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:32:59.584231  619438 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.5d6eefc6
	I0917 00:32:59.584246  619438 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.5d6eefc6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0917 00:33:00.130871  619438 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.5d6eefc6 ...
	I0917 00:33:00.130908  619438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.5d6eefc6: {Name:mkf467d0f9030b6e7125c3be410cb9c880d64270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:00.131088  619438 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.5d6eefc6 ...
	I0917 00:33:00.131108  619438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.5d6eefc6: {Name:mk8b3c4ad94a18f1741ce8fdbeceb16bceee6f1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:00.131220  619438 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.5d6eefc6 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt
	I0917 00:33:00.131404  619438 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.5d6eefc6 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key
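The regenerated apiserver certificate has to carry every address a client might dial: the in-cluster service IPs, loopback, all three control-plane node IPs, and the kube-vip VIP, exactly the SAN list logged above. A standard openssl inspection confirms what was baked in:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt \
      | grep -A1 'Subject Alternative Name'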
	I0917 00:33:00.131601  619438 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:33:00.131625  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:33:00.131643  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:33:00.131658  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:33:00.131673  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:33:00.131687  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:33:00.131702  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:33:00.131714  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:33:00.131729  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:33:00.131788  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:33:00.131823  619438 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:33:00.131830  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:33:00.131857  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:33:00.131878  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:33:00.131897  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:33:00.131942  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:33:00.131980  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:00.132001  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:33:00.132015  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:33:00.132585  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:33:00.165089  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:33:00.198657  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:33:00.239751  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:33:00.280419  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:33:00.317099  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:33:00.355265  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:33:00.390225  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:33:00.418200  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:33:00.443790  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:33:00.469778  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:33:00.495605  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:33:00.516723  619438 ssh_runner.go:195] Run: openssl version
	I0917 00:33:00.522849  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:33:00.533838  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:00.538041  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:00.538112  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:00.545733  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:33:00.555787  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:33:00.566338  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:33:00.570140  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:33:00.570203  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:33:00.577687  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:33:00.587720  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:33:00.599252  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:33:00.603349  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:33:00.603456  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:33:00.611701  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
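The ln -fs calls above implement OpenSSL's standard CA lookup scheme: verifiers search /etc/ssl/certs for a file named <subject-hash>.0, so each trusted PEM gets a symlink named after its hash. The hash values used here (b5213941, 51391683, 3ec20f2e) come from the same computation the log runs just before each link:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"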
	I0917 00:33:00.622604  619438 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:33:00.626359  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:33:00.633232  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:33:00.640671  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:33:00.647926  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:33:00.655266  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:33:00.662987  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
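Each -checkend 86400 call above exits 0 only if the certificate will still be valid 24 hours from now, so a non-zero status is what would force regeneration. The same test reads naturally as a shell guard:

    if ! openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
      echo "etcd server cert expires within 24h, regenerate" >&2
    fi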
	I0917 00:33:00.670413  619438 kubeadm.go:392] StartCluster: {Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadge
t:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:33:00.670534  619438 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 00:33:00.670583  619438 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:33:00.712724  619438 cri.go:89] found id: "dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c"
	I0917 00:33:00.712747  619438 cri.go:89] found id: "c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3"
	I0917 00:33:00.712751  619438 cri.go:89] found id: "3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da"
	I0917 00:33:00.712754  619438 cri.go:89] found id: "3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49"
	I0917 00:33:00.712757  619438 cri.go:89] found id: "feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15"
	I0917 00:33:00.712761  619438 cri.go:89] found id: ""
	I0917 00:33:00.712805  619438 ssh_runner.go:195] Run: sudo runc list -f json
	I0917 00:33:00.733477  619438 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49","pid":805,"status":"running","bundle":"/run/containers/storage/overlay-containers/3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49/userdata","rootfs":"/var/lib/containers/storage/overlay/d1bbef73ef376ea943ccf80c23fb8fd4556f886e52e63a59db0627508fb2430b/merged","created":"2025-09-17T00:33:00.224803069Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d64ad60b","io.kubernetes.container.name":"kube-vip","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d64ad60b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMes
sagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:33:00.170354801Z","io.kubernetes.cri-o.Image":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.ImageName":"ghcr.io/kube-vip/kube-vip:v1.0.0","io.kubernetes.cri-o.ImageRef":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-vip\",\"io.kubernetes.pod.name\":\"kube-vip-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a7817082b8b3b4ebaac6b1c6cc40fe3e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-vip-ha-671025_a7817082b8b3b4ebaac6b1c6cc40fe3e/kube-vip/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-vip\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/
storage/overlay/d1bbef73ef376ea943ccf80c23fb8fd4556f886e52e63a59db0627508fb2430b/merged","io.kubernetes.cri-o.Name":"k8s_kube-vip_kube-vip-ha-671025_kube-system_a7817082b8b3b4ebaac6b1c6cc40fe3e_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/aca3020b8c9d03c59812f32aa02323ace09e6b9784e7f9b6eae4976a3eab2f1d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"aca3020b8c9d03c59812f32aa02323ace09e6b9784e7f9b6eae4976a3eab2f1d","io.kubernetes.cri-o.SandboxName":"k8s_kube-vip-ha-671025_kube-system_a7817082b8b3b4ebaac6b1c6cc40fe3e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a7817082b8b3b4ebaac6b1c6cc40fe3e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a781708
2b8b3b4ebaac6b1c6cc40fe3e/containers/kube-vip/367d19bd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/admin.conf\",\"host_path\":\"/etc/kubernetes/admin.conf\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-vip-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a7817082b8b3b4ebaac6b1c6cc40fe3e","kubernetes.io/config.hash":"a7817082b8b3b4ebaac6b1c6cc40fe3e","kubernetes.io/config.seen":"2025-09-17T00:32:59.669171997Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da","pid":880,"status":"running","bundle":"/run/containers/
storage/overlay-containers/3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da/userdata","rootfs":"/var/lib/containers/storage/overlay/9b7a3dc090f584f6e4f5509cd9284edde85ace5b420fc8c9f6eae4139c98d2aa/merged","created":"2025-09-17T00:33:00.275833142Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d671eaa0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d671eaa0\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8443,\\\"containerPort\\\":8443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePa
th\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:33:00.202504428Z","io.kubernetes.cri-o.Image":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri-o.ImageRef":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b5ccb738eb1160dc60c2973028d04964\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-ha-671025_b5ccb738eb1160dc60c2973028d04964/kube-apiserver/1.log","io.kuberne
tes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9b7a3dc090f584f6e4f5509cd9284edde85ace5b420fc8c9f6eae4139c98d2aa/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-ha-671025_kube-system_b5ccb738eb1160dc60c2973028d04964_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c0bb4371ed6c8742b2ad9f89d7b5b46fbc83b2b33c92890300a7de93cb2ebbb6/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c0bb4371ed6c8742b2ad9f89d7b5b46fbc83b2b33c92890300a7de93cb2ebbb6","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-ha-671025_kube-system_b5ccb738eb1160dc60c2973028d04964_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b5ccb738eb1160dc60c2973028d04964/containers/kube-ap
iserver/6df491f2\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b5ccb738eb1160dc60c2973028d04964/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":f
alse}]","io.kubernetes.pod.name":"kube-apiserver-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b5ccb738eb1160dc60c2973028d04964","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"b5ccb738eb1160dc60c2973028d04964","kubernetes.io/config.seen":"2025-09-17T00:32:59.669167256Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3","pid":894,"status":"running","bundle":"/run/containers/storage/overlay-containers/c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3/userdata","rootfs":"/var/lib/containers/storage/overlay/064810f36ba8359e1cc403cdd3631d6973a
9bffec85a2a35b5e8e008790d2da1/merged","created":"2025-09-17T00:33:00.274952825Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"85eae708","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"85eae708\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID"
:"c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:33:00.203434002Z","io.kubernetes.cri-o.Image":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri-o.ImageRef":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"74a9cbd6392d4b9acfdd053de2761cb8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-ha-671025_74a9cbd6392d4b9acfdd053de2761cb8/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/064810f36ba8359e1cc403cdd3631d6973a9bffec85
a2a35b5e8e008790d2da1/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-ha-671025_kube-system_74a9cbd6392d4b9acfdd053de2761cb8_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0d6a7ac1856cbec973e10d8124dc32d2336942aefec9e4e328bba1938afb798a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0d6a7ac1856cbec973e10d8124dc32d2336942aefec9e4e328bba1938afb798a","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-ha-671025_kube-system_74a9cbd6392d4b9acfdd053de2761cb8_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/74a9cbd6392d4b9acfdd053de2761cb8/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/74a9cbd6392d4b9acfdd053de2761cb8/containers/kube
-scheduler/513703c7\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"74a9cbd6392d4b9acfdd053de2761cb8","kubernetes.io/config.hash":"74a9cbd6392d4b9acfdd053de2761cb8","kubernetes.io/config.seen":"2025-09-17T00:32:59.669170685Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c","pid":914,"status":"running","bundle":"/run/containers/storage/overlay-contai
ners/dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c/userdata","rootfs":"/var/lib/containers/storage/overlay/7b172e441c6d71eaa8c8337753bce771b451d1d95369d9d84519996303a3c5c0/merged","created":"2025-09-17T00:33:00.286793858Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7eaa1830","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7eaa1830\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/d
ev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:33:00.204654096Z","io.kubernetes.cri-o.Image":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri-o.ImageRef":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8d1e0f98935496199c8e8278a2410d09\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-ha-671025_8d1e0f98935496199c8e8278a2410d09/kube-c
ontroller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7b172e441c6d71eaa8c8337753bce771b451d1d95369d9d84519996303a3c5c0/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-ha-671025_kube-system_8d1e0f98935496199c8e8278a2410d09_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/17b3a59f2d7b6e908cfd321a66c6b87feb6fb4fe0c647bb872c8981c7768653d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"17b3a59f2d7b6e908cfd321a66c6b87feb6fb4fe0c647bb872c8981c7768653d","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-ha-671025_kube-system_8d1e0f98935496199c8e8278a2410d09_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/
etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8d1e0f98935496199c8e8278a2410d09/containers/kube-controller-manager/7587fc8c\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8d1e0f98935496199c8e8278a2410d09/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"ho
st_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8d1e0f98935496199c8e8278a2410d09","kubernetes.io/config.hash":"8d1e0f98935496199c8e8278a2410d09","kubernetes.io/config.seen":"2025-09-17T00:32:59.669169006Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.system
d.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15","pid":809,"status":"running","bundle":"/run/containers/storage/overlay-containers/feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15/userdata","rootfs":"/var/lib/containers/storage/overlay/0de8b6318aa0eefff40d78b1a2eccd71a123a2f8a8081d228455fb7b3b8e91aa/merged","created":"2025-09-17T00:33:00.227524758Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9e20c65","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9e20c65\",\"io.kubernetes.container.ports\":\"[{\
\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:33:00.156861142Z","io.kubernetes.cri-o.Image":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri-o.ImageRef":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"629bf94aa
8286a4aae957269fae7c79b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-ha-671025_629bf94aa8286a4aae957269fae7c79b/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0de8b6318aa0eefff40d78b1a2eccd71a123a2f8a8081d228455fb7b3b8e91aa/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-ha-671025_kube-system_629bf94aa8286a4aae957269fae7c79b_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ff786868f6409aa327dcae8a4aa518d72def9dcd14446677c7ba027c7a4a57b9/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ff786868f6409aa327dcae8a4aa518d72def9dcd14446677c7ba027c7a4a57b9","io.kubernetes.cri-o.SandboxName":"k8s_etcd-ha-671025_kube-system_629bf94aa8286a4aae957269fae7c79b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\
":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/629bf94aa8286a4aae957269fae7c79b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/629bf94aa8286a4aae957269fae7c79b/containers/etcd/188c438f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"629bf94aa8286a4aae957269fae7c79b","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"629bf94aa8286a4aae957269fae7c79b",
"kubernetes.io/config.seen":"2025-09-17T00:32:59.669161890Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0917 00:33:00.733792  619438 cri.go:126] list returned 5 containers
	I0917 00:33:00.733811  619438 cri.go:129] container: {ID:3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49 Status:running}
	I0917 00:33:00.733830  619438 cri.go:135] skipping {3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49 running}: state = "running", want "paused"
	I0917 00:33:00.733846  619438 cri.go:129] container: {ID:3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da Status:running}
	I0917 00:33:00.733857  619438 cri.go:135] skipping {3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da running}: state = "running", want "paused"
	I0917 00:33:00.733867  619438 cri.go:129] container: {ID:c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3 Status:running}
	I0917 00:33:00.733875  619438 cri.go:135] skipping {c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3 running}: state = "running", want "paused"
	I0917 00:33:00.733884  619438 cri.go:129] container: {ID:dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c Status:running}
	I0917 00:33:00.733891  619438 cri.go:135] skipping {dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c running}: state = "running", want "paused"
	I0917 00:33:00.733906  619438 cri.go:129] container: {ID:feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15 Status:running}
	I0917 00:33:00.733915  619438 cri.go:135] skipping {feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15 running}: state = "running", want "paused"
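The five "skipping" lines above are minikube's CRI helper filtering the container list: it keeps only containers whose state matches the one it wants (here "paused"), so a fully running control plane yields nothing to resume. A minimal sketch of that filter, with a hypothetical Container type standing in for minikube's internal one:

package main

import "fmt"

// Container mirrors the {ID Status} pairs printed above; the type is
// hypothetical, not minikube's internal one.
type Container struct {
	ID     string
	Status string
}

// filterByState keeps containers whose state matches want and logs a
// skip for the rest, like the cri.go:135 lines above.
func filterByState(cs []Container, want string) []Container {
	var keep []Container
	for _, c := range cs {
		if c.Status != want {
			fmt.Printf("skipping {%s %s}: state = %q, want %q\n",
				c.ID, c.Status, c.Status, want)
			continue
		}
		keep = append(keep, c)
	}
	return keep
}

func main() {
	paused := filterByState([]Container{{ID: "3a99a51aacd4", Status: "running"}}, "paused")
	fmt.Println("matched:", len(paused)) // 0: nothing needs unpausing
}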
	I0917 00:33:00.733967  619438 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:33:00.743818  619438 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:33:00.743842  619438 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:33:00.743896  619438 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:33:00.753049  619438 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:33:00.753478  619438 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-671025" does not appear in /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:33:00.753570  619438 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-517646/kubeconfig needs updating (will repair): [kubeconfig missing "ha-671025" cluster setting kubeconfig missing "ha-671025" context setting]
	I0917 00:33:00.753860  619438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:00.754368  619438 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
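The struct dump above is a client-go rest.Config: the fields doing the work are Host plus the client certificate, key, and CA paths under TLSClientConfig, which give the client mutual TLS against the apiserver. A sketch of building an equivalent config by hand, using the same profile paths as the dump (rest.TLSClientConfig is the exported counterpart of the sanitized form the log prints):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// The three file paths are the ones from the dump above; the other
	// fields in the dump are zero values and can be left defaulted.
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", clientset != nil)
}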
	I0917 00:33:00.754887  619438 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:33:00.754902  619438 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:33:00.754906  619438 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:33:00.754911  619438 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:33:00.754914  619438 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:33:00.754984  619438 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:33:00.755286  619438 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:33:00.764691  619438 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0917 00:33:00.764721  619438 kubeadm.go:593] duration metric: took 20.872209ms to restartPrimaryControlPlane
	I0917 00:33:00.764732  619438 kubeadm.go:394] duration metric: took 94.344936ms to StartCluster
	I0917 00:33:00.764754  619438 settings.go:142] acquiring lock: {Name:mk3b4e5824fb8718eece00dc70a9d05f0af2a028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:00.764829  619438 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:33:00.765434  619438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:00.765678  619438 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:33:00.765703  619438 start.go:241] waiting for startup goroutines ...
	I0917 00:33:00.765712  619438 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:33:00.765954  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:00.768475  619438 out.go:179] * Enabled addons: 
	I0917 00:33:00.769396  619438 addons.go:514] duration metric: took 3.672053ms for enable addons: enabled=[]
	I0917 00:33:00.769427  619438 start.go:246] waiting for cluster config update ...
	I0917 00:33:00.769435  619438 start.go:255] writing updated cluster config ...
	I0917 00:33:00.770640  619438 out.go:203] 
	I0917 00:33:00.771782  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:00.771882  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:00.773295  619438 out.go:179] * Starting "ha-671025-m02" control-plane node in "ha-671025" cluster
	I0917 00:33:00.774266  619438 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:33:00.775272  619438 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:33:00.776246  619438 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:33:00.776270  619438 cache.go:58] Caching tarball of preloaded images
	I0917 00:33:00.776303  619438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:33:00.776369  619438 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:33:00.776383  619438 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:33:00.776522  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:00.798181  619438 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:33:00.798201  619438 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:33:00.798221  619438 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:33:00.798259  619438 start.go:360] acquireMachinesLock for ha-671025-m02: {Name:mk1465985964f60af81adbf10dbe0a21c7eb20d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:33:00.798335  619438 start.go:364] duration metric: took 52.828µs to acquireMachinesLock for "ha-671025-m02"
	I0917 00:33:00.798366  619438 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:33:00.798404  619438 fix.go:54] fixHost starting: m02
	I0917 00:33:00.798630  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:33:00.816952  619438 fix.go:112] recreateIfNeeded on ha-671025-m02: state=Stopped err=<nil>
	W0917 00:33:00.816988  619438 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:33:00.818588  619438 out.go:252] * Restarting existing docker container for "ha-671025-m02" ...
	I0917 00:33:00.818663  619438 cli_runner.go:164] Run: docker start ha-671025-m02
	I0917 00:33:01.089289  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:33:01.112171  619438 kic.go:430] container "ha-671025-m02" state is running.
	I0917 00:33:01.112607  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:33:01.134692  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:01.134992  619438 machine.go:93] provisionDockerMachine start ...
	I0917 00:33:01.135064  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:01.156210  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:01.156564  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0917 00:33:01.156582  619438 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:33:01.157427  619438 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34164->127.0.0.1:33183: read: connection reset by peer
	I0917 00:33:04.296769  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m02
	
	I0917 00:33:04.296809  619438 ubuntu.go:182] provisioning hostname "ha-671025-m02"
	I0917 00:33:04.296905  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:04.315073  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:04.315310  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0917 00:33:04.315323  619438 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m02 && echo "ha-671025-m02" | sudo tee /etc/hostname
	I0917 00:33:04.466025  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m02
	
	I0917 00:33:04.466110  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:04.484268  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:04.484535  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0917 00:33:04.484554  619438 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:33:04.621439  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
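The shell snippet that just ran is idempotent: it leaves /etc/hosts alone when some line already ends in the hostname, rewrites an existing 127.0.1.1 entry if there is one, and appends a new one otherwise. The same logic restated in Go against a local file (a sketch, not minikube's code):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostname mirrors the grep/sed/tee sequence above: make sure
// path maps some address to name, preferring to reuse a 127.0.1.1 line.
func ensureHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).Match(data) {
		return nil // already mapped, nothing to do
	}
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loop.Match(data) {
		data = loop.ReplaceAll(data, []byte("127.0.1.1 "+name))
	} else {
		data = append(data, []byte("127.0.1.1 "+name+"\n")...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	fmt.Println(ensureHostname("/etc/hosts", "ha-671025-m02"))
}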
	I0917 00:33:04.621482  619438 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:33:04.621501  619438 ubuntu.go:190] setting up certificates
	I0917 00:33:04.621511  619438 provision.go:84] configureAuth start
	I0917 00:33:04.621573  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:33:04.640283  619438 provision.go:143] copyHostCerts
	I0917 00:33:04.640335  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:33:04.640368  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:33:04.640383  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:33:04.640480  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:33:04.640601  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:33:04.640634  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:33:04.640652  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:33:04.640698  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:33:04.640784  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:33:04.640809  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:33:04.640818  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:33:04.640852  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:33:04.640942  619438 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m02 san=[127.0.0.1 192.168.49.3 ha-671025-m02 localhost minikube]
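provision.go:117 is minting a server certificate whose SANs cover every name and address the node answers to: 127.0.0.1, 192.168.49.3, the hostname, localhost, and minikube. A compressed sketch of the same idea with crypto/x509; it self-signs for brevity and picks an arbitrary validity, whereas minikube signs with ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-671025-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour), // validity chosen arbitrarily
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the log line above.
		DNSNames:    []string{"ha-671025-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
	}
	// Self-signed: template doubles as parent. minikube passes its CA here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}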
	I0917 00:33:04.733693  619438 provision.go:177] copyRemoteCerts
	I0917 00:33:04.733759  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:33:04.733809  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:04.752499  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:33:04.850462  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:33:04.850518  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:33:04.876387  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:33:04.876625  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:33:04.904017  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:33:04.904091  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:33:04.932067  619438 provision.go:87] duration metric: took 310.54132ms to configureAuth
	I0917 00:33:04.932114  619438 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:33:04.932333  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:04.932519  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:04.950911  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:04.951173  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0917 00:33:04.951192  619438 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:33:13.583717  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:33:13.583742  619438 machine.go:96] duration metric: took 12.448736712s to provisionDockerMachine
	I0917 00:33:13.583754  619438 start.go:293] postStartSetup for "ha-671025-m02" (driver="docker")
	I0917 00:33:13.583768  619438 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:33:13.583844  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:33:13.583889  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:13.602374  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:33:13.704271  619438 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:33:13.709862  619438 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:33:13.709910  619438 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:33:13.709921  619438 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:33:13.709930  619438 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:33:13.709945  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:33:13.710027  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:33:13.710128  619438 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:33:13.710138  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:33:13.710258  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:33:13.726542  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:33:13.762021  619438 start.go:296] duration metric: took 178.248287ms for postStartSetup
	I0917 00:33:13.762146  619438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:33:13.762202  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:13.785807  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:33:13.885926  619438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:33:13.890781  619438 fix.go:56] duration metric: took 13.092394555s for fixHost
	I0917 00:33:13.890814  619438 start.go:83] releasing machines lock for "ha-671025-m02", held for 13.092464098s
	I0917 00:33:13.890888  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:33:13.912194  619438 out.go:179] * Found network options:
	I0917 00:33:13.913617  619438 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:33:13.914820  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:33:13.914864  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:33:13.914934  619438 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:33:13.914975  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:13.915050  619438 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:33:13.915121  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:13.935804  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:33:13.936030  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:33:14.188511  619438 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:33:14.195453  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:33:14.211117  619438 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:33:14.211201  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:33:14.227642  619438 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:33:14.227708  619438 start.go:495] detecting cgroup driver to use...
	I0917 00:33:14.227849  619438 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:33:14.227922  619438 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:33:14.251293  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:33:14.271238  619438 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:33:14.271313  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:33:14.288904  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:33:14.307961  619438 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:33:14.437900  619438 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:33:14.545190  619438 docker.go:234] disabling docker service ...
	I0917 00:33:14.545281  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:33:14.560872  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:33:14.573584  619438 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:33:14.680197  619438 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:33:14.811100  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:33:14.825885  619438 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:33:14.847059  619438 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:33:14.847127  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.859808  619438 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:33:14.859899  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.871797  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.883328  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.896664  619438 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:33:14.907675  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.918906  619438 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.929358  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.941273  619438 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:33:14.953043  619438 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:33:14.967648  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:15.083218  619438 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 00:33:21.777437  619438 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.694178293s)
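The sed run above (crio.go:59-70) rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to systemd, pin conmon_cgroup to "pod", and open unprivileged ports via default_sysctls, then restart CRI-O to pick it all up. The first two edits restated in Go, over made-up starting content:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for /etc/crio/crio.conf.d/02-crio.conf; contents made up.
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"
	// Mirror the two sed -i substitutions from the log.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	fmt.Print(conf)
}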
	I0917 00:33:21.777485  619438 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:33:21.777539  619438 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:33:21.781615  619438 start.go:563] Will wait 60s for crictl version
	I0917 00:33:21.781681  619438 ssh_runner.go:195] Run: which crictl
	I0917 00:33:21.785837  619438 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:33:21.828119  619438 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:33:21.828217  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:33:21.874252  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:33:21.916319  619438 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:33:21.917788  619438 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:33:21.918929  619438 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:33:21.938354  619438 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:33:21.942655  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:33:21.956120  619438 mustload.go:65] Loading cluster: ha-671025
	I0917 00:33:21.956460  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:21.956800  619438 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:33:21.976493  619438 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:33:21.976752  619438 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.3
	I0917 00:33:21.976765  619438 certs.go:194] generating shared ca certs ...
	I0917 00:33:21.976779  619438 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:21.976919  619438 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:33:21.976970  619438 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:33:21.976980  619438 certs.go:256] generating profile certs ...
	I0917 00:33:21.977105  619438 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:33:21.977160  619438 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.289f7349
	I0917 00:33:21.977201  619438 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:33:21.977214  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:33:21.977226  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:33:21.977238  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:33:21.977248  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:33:21.977263  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:33:21.977277  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:33:21.977292  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:33:21.977304  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:33:21.977348  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:33:21.977374  619438 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:33:21.977384  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:33:21.977437  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:33:21.977468  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:33:21.977488  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:33:21.977537  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:33:21.977566  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:33:21.977579  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:21.977591  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:33:21.977641  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:33:21.996033  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:33:22.086756  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:33:22.091430  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:33:22.105578  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:33:22.109474  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 00:33:22.123413  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:33:22.127015  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:33:22.140675  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:33:22.145374  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 00:33:22.160202  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:33:22.164648  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:33:22.179040  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:33:22.182820  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:33:22.197252  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:33:22.226621  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:33:22.255420  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:33:22.284497  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:33:22.313100  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:33:22.339570  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:33:22.368270  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:33:22.395836  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:33:22.424911  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:33:22.451321  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:33:22.479698  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:33:22.509017  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:33:22.530192  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 00:33:22.550277  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:33:22.570982  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 00:33:22.591763  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:33:22.615610  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:33:22.637548  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:33:22.660728  619438 ssh_runner.go:195] Run: openssl version
	I0917 00:33:22.668525  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:33:22.679921  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:33:22.684865  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:33:22.684929  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:33:22.692513  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:33:22.703651  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:33:22.716758  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:22.721573  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:22.721639  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:22.729408  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:33:22.740799  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:33:22.754481  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:33:22.759515  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:33:22.759591  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:33:22.769873  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
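Each openssl x509 -hash call computes a certificate's subject hash, and the matching ln -fs publishes it as /etc/ssl/certs/<hash>.0, the layout OpenSSL-based clients scan when verifying peers (b5213941.0 above is minikubeCA's hash). A sketch that shells out the same way, reusing the paths from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// Ask openssl for the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Publish the cert under its hash so TLS libraries can find it.
	if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}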
	I0917 00:33:22.780940  619438 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:33:22.785123  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:33:22.792739  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:33:22.800305  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:33:22.808094  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:33:22.815985  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:33:22.823772  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
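Each -checkend 86400 call asks whether the certificate is still valid 24 hours from now, so a restart fails fast on imminently expiring control-plane certs rather than mid-flight. The equivalent check in pure Go (a sketch; minikube shells out to openssl as shown):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// inside the next d, matching `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}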
	I0917 00:33:22.830968  619438 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 crio true true} ...
	I0917 00:33:22.831108  619438 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:33:22.831135  619438 kube-vip.go:115] generating kube-vip config ...
	I0917 00:33:22.831174  619438 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:33:22.845445  619438 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:33:22.845549  619438 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
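This manifest is the fallback branch: the kube-vip.go:163 probe above found no ip_vs kernel modules, so instead of IPVS-based load-balancing the generated config relies on ARP announcement (vip_arp) plus lease-based leader election to float 192.168.49.254 between the control-plane nodes. The probe itself is just a module listing, roughly:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same check as `lsmod | grep ip_vs`; error ignored for the sketch.
	out, _ := exec.Command("lsmod").Output()
	if strings.Contains(string(out), "ip_vs") {
		fmt.Println("IPVS available: enable control-plane load-balancing")
	} else {
		fmt.Println("no ip_vs modules: fall back to ARP VIP with leader election")
	}
}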
	I0917 00:33:22.845617  619438 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:33:22.856831  619438 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:33:22.856928  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:33:22.867889  619438 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 00:33:22.888469  619438 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:33:22.908498  619438 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:33:22.929249  619438 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:33:22.933575  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:33:22.945785  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:23.049186  619438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:33:23.063035  619438 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:33:23.063337  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:23.065109  619438 out.go:179] * Verifying Kubernetes components...
	I0917 00:33:23.066721  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:23.162455  619438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:33:23.176145  619438 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:33:23.176215  619438 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:33:23.176479  619438 node_ready.go:35] waiting up to 6m0s for node "ha-671025-m02" to be "Ready" ...
	I0917 00:33:23.185303  619438 node_ready.go:49] node "ha-671025-m02" is "Ready"
	I0917 00:33:23.185333  619438 node_ready.go:38] duration metric: took 8.819618ms for node "ha-671025-m02" to be "Ready" ...
	I0917 00:33:23.185350  619438 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:33:23.185420  619438 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:33:23.197637  619438 api_server.go:72] duration metric: took 134.535244ms to wait for apiserver process to appear ...
	I0917 00:33:23.197672  619438 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:33:23.197693  619438 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:33:23.202879  619438 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:33:23.204114  619438 api_server.go:141] control plane version: v1.34.0
	I0917 00:33:23.204224  619438 api_server.go:131] duration metric: took 6.534103ms to wait for apiserver health ...
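The health gate above is a plain HTTPS GET: /healthz on the apiserver must return 200 with body "ok" before minikube moves on to pod checks. A stripped-down sketch of the same probe (InsecureSkipVerify keeps it short; the real client pins the cluster CA as in the rest.Config shown earlier):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // want: 200 ok
}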
	I0917 00:33:23.204244  619438 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:33:23.211681  619438 system_pods.go:59] 24 kube-system pods found
	I0917 00:33:23.211742  619438 system_pods.go:61] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:23.211758  619438 system_pods.go:61] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:23.211769  619438 system_pods.go:61] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:33:23.211777  619438 system_pods.go:61] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:33:23.211783  619438 system_pods.go:61] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running
	I0917 00:33:23.211792  619438 system_pods.go:61] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 00:33:23.211798  619438 system_pods.go:61] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:33:23.211807  619438 system_pods.go:61] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:33:23.211816  619438 system_pods.go:61] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:33:23.211822  619438 system_pods.go:61] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:33:23.211829  619438 system_pods.go:61] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running
	I0917 00:33:23.211836  619438 system_pods.go:61] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:33:23.211844  619438 system_pods.go:61] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:33:23.211850  619438 system_pods.go:61] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running
	I0917 00:33:23.211859  619438 system_pods.go:61] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:33:23.211867  619438 system_pods.go:61] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:33:23.211875  619438 system_pods.go:61] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:33:23.211881  619438 system_pods.go:61] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:33:23.211888  619438 system_pods.go:61] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:33:23.211896  619438 system_pods.go:61] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running
	I0917 00:33:23.211902  619438 system_pods.go:61] "kube-vip-ha-671025" [bcb7c84b-932c-463e-a710-1d665741e70a] Running
	I0917 00:33:23.211907  619438 system_pods.go:61] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:33:23.211913  619438 system_pods.go:61] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:33:23.211919  619438 system_pods.go:61] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:33:23.211928  619438 system_pods.go:74] duration metric: took 7.670911ms to wait for pod list to return data ...
	I0917 00:33:23.211942  619438 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:33:23.215282  619438 default_sa.go:45] found service account: "default"
	I0917 00:33:23.215305  619438 default_sa.go:55] duration metric: took 3.354164ms for default service account to be created ...
	I0917 00:33:23.215314  619438 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:33:23.220686  619438 system_pods.go:86] 24 kube-system pods found
	I0917 00:33:23.220721  619438 system_pods.go:89] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:23.220730  619438 system_pods.go:89] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:23.220737  619438 system_pods.go:89] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:33:23.220741  619438 system_pods.go:89] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:33:23.220745  619438 system_pods.go:89] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running
	I0917 00:33:23.220750  619438 system_pods.go:89] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 00:33:23.220753  619438 system_pods.go:89] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:33:23.220759  619438 system_pods.go:89] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:33:23.220763  619438 system_pods.go:89] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:33:23.220768  619438 system_pods.go:89] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:33:23.220771  619438 system_pods.go:89] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running
	I0917 00:33:23.220774  619438 system_pods.go:89] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:33:23.220778  619438 system_pods.go:89] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:33:23.220782  619438 system_pods.go:89] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running
	I0917 00:33:23.220786  619438 system_pods.go:89] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:33:23.220790  619438 system_pods.go:89] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:33:23.220793  619438 system_pods.go:89] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:33:23.220796  619438 system_pods.go:89] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:33:23.220800  619438 system_pods.go:89] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:33:23.220803  619438 system_pods.go:89] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running
	I0917 00:33:23.220806  619438 system_pods.go:89] "kube-vip-ha-671025" [bcb7c84b-932c-463e-a710-1d665741e70a] Running
	I0917 00:33:23.220808  619438 system_pods.go:89] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:33:23.220812  619438 system_pods.go:89] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:33:23.220816  619438 system_pods.go:89] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:33:23.220822  619438 system_pods.go:126] duration metric: took 5.503704ms to wait for k8s-apps to be running ...
	I0917 00:33:23.220830  619438 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:33:23.220878  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:33:23.233344  619438 system_svc.go:56] duration metric: took 12.501522ms WaitForService to wait for kubelet
	I0917 00:33:23.233378  619438 kubeadm.go:578] duration metric: took 170.282ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:33:23.233426  619438 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:33:23.237203  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:23.237235  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:23.237249  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:23.237253  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:23.237258  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:23.237263  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:23.237268  619438 node_conditions.go:105] duration metric: took 3.836923ms to run NodePressure ...
	I0917 00:33:23.237281  619438 start.go:241] waiting for startup goroutines ...
	I0917 00:33:23.237310  619438 start.go:255] writing updated cluster config ...
	I0917 00:33:23.239362  619438 out.go:203] 
	I0917 00:33:23.240662  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:23.240787  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:23.242255  619438 out.go:179] * Starting "ha-671025-m03" control-plane node in "ha-671025" cluster
	I0917 00:33:23.243650  619438 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:33:23.244785  619438 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:33:23.245985  619438 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:33:23.246015  619438 cache.go:58] Caching tarball of preloaded images
	I0917 00:33:23.246076  619438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:33:23.246103  619438 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:33:23.246111  619438 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:33:23.246237  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:23.267677  619438 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:33:23.267698  619438 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:33:23.267719  619438 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:33:23.267746  619438 start.go:360] acquireMachinesLock for ha-671025-m03: {Name:mk60ae20c28e89b2af34eaf4825fcf2e756b9f82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:33:23.267801  619438 start.go:364] duration metric: took 38.266µs to acquireMachinesLock for "ha-671025-m03"
	I0917 00:33:23.267818  619438 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:33:23.267825  619438 fix.go:54] fixHost starting: m03
	I0917 00:33:23.268049  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:33:23.286470  619438 fix.go:112] recreateIfNeeded on ha-671025-m03: state=Stopped err=<nil>
	W0917 00:33:23.286501  619438 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:33:23.288337  619438 out.go:252] * Restarting existing docker container for "ha-671025-m03" ...
	I0917 00:33:23.288444  619438 cli_runner.go:164] Run: docker start ha-671025-m03
	I0917 00:33:23.539232  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:33:23.559852  619438 kic.go:430] container "ha-671025-m03" state is running.
	I0917 00:33:23.560281  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:33:23.582181  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:23.582448  619438 machine.go:93] provisionDockerMachine start ...
	I0917 00:33:23.582512  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:23.603240  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:23.603508  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0917 00:33:23.603524  619438 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:33:23.604268  619438 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54628->127.0.0.1:33188: read: connection reset by peer
	I0917 00:33:26.756053  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m03
	
	I0917 00:33:26.756095  619438 ubuntu.go:182] provisioning hostname "ha-671025-m03"
	I0917 00:33:26.756163  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:26.775553  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:26.775816  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0917 00:33:26.775832  619438 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m03 && echo "ha-671025-m03" | sudo tee /etc/hostname
	I0917 00:33:26.929724  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m03
	
	I0917 00:33:26.929811  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:26.948952  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:26.949181  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0917 00:33:26.949199  619438 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:33:27.097686  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
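The shell block above makes the hostname survive inside the container's /etc/hosts: it replaces an existing 127.0.1.1 mapping if one is present, otherwise appends one. A hedged way to confirm the result from the host, reusing the SSH port and key path that appear in the ssh client lines of this log:

	# Sketch, not part of the minikube run: check the persisted mapping.
	# grep -x matches the whole line, so exit 0 means the entry is exact.
	ssh -i /home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa \
	    -p 33188 docker@127.0.0.1 \
	    'hostname; grep -x "127.0.1.1 ha-671025-m03" /etc/hosts'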
	I0917 00:33:27.097724  619438 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:33:27.097808  619438 ubuntu.go:190] setting up certificates
	I0917 00:33:27.097838  619438 provision.go:84] configureAuth start
	I0917 00:33:27.097905  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:33:27.124607  619438 provision.go:143] copyHostCerts
	I0917 00:33:27.124661  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:33:27.124704  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:33:27.124712  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:33:27.124796  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:33:27.124902  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:33:27.124927  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:33:27.124938  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:33:27.124978  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:33:27.125071  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:33:27.125093  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:33:27.125097  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:33:27.125123  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:33:27.125202  619438 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m03 san=[127.0.0.1 192.168.49.4 ha-671025-m03 localhost minikube]
	I0917 00:33:27.491028  619438 provision.go:177] copyRemoteCerts
	I0917 00:33:27.491103  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:33:27.491153  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:27.510894  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:33:27.621913  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:33:27.621991  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:33:27.659332  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:33:27.659436  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:33:27.694265  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:33:27.694331  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:33:27.729012  619438 provision.go:87] duration metric: took 631.150589ms to configureAuth
	I0917 00:33:27.729044  619438 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:33:27.729332  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:27.729498  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:27.752375  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:27.752667  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0917 00:33:27.752694  619438 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:33:28.163571  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:33:28.163606  619438 machine.go:96] duration metric: took 4.581141061s to provisionDockerMachine
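The env file written above is how the --insecure-registry flag for the service CIDR (10.96.0.0/12) reaches CRI-O. This assumes the kicbase crio.service sources /etc/sysconfig/crio.minikube via an EnvironmentFile directive, which can be checked on the node:

	# Hedged check; assumes crio.service sources the env file written above.
	systemctl cat crio | grep -n minikube
	cat /etc/sysconfig/crio.minikube
	# expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '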
	I0917 00:33:28.163625  619438 start.go:293] postStartSetup for "ha-671025-m03" (driver="docker")
	I0917 00:33:28.163636  619438 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:33:28.163694  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:33:28.163736  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:28.183221  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:33:28.282370  619438 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:33:28.286033  619438 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:33:28.286069  619438 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:33:28.286080  619438 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:33:28.286089  619438 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:33:28.286103  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:33:28.286167  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:33:28.286260  619438 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:33:28.286273  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:33:28.286385  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:33:28.296210  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:33:28.323607  619438 start.go:296] duration metric: took 159.96344ms for postStartSetup
	I0917 00:33:28.323744  619438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:33:28.323801  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:28.341948  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:33:28.437100  619438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:33:28.442217  619438 fix.go:56] duration metric: took 5.174381535s for fixHost
	I0917 00:33:28.442251  619438 start.go:83] releasing machines lock for "ha-671025-m03", held for 5.17444003s
	I0917 00:33:28.442339  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:33:28.462490  619438 out.go:179] * Found network options:
	I0917 00:33:28.463995  619438 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0917 00:33:28.465339  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:33:28.465379  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:33:28.465437  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:33:28.465456  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:33:28.465540  619438 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:33:28.465604  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:28.465608  619438 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:33:28.465666  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:28.484618  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:33:28.484954  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:33:28.729938  619438 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:33:28.735367  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:33:28.746253  619438 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:33:28.746345  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:33:28.757317  619438 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:33:28.757344  619438 start.go:495] detecting cgroup driver to use...
	I0917 00:33:28.757382  619438 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:33:28.757457  619438 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:33:28.772308  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:33:28.784900  619438 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:33:28.784967  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:33:28.800003  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:33:28.812730  619438 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:33:28.927855  619438 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:33:29.059441  619438 docker.go:234] disabling docker service ...
	I0917 00:33:29.059519  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:33:29.078537  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:33:29.093278  619438 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:33:29.210953  619438 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:33:29.324946  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:33:29.337107  619438 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:33:29.355136  619438 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:33:29.355186  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.366142  619438 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:33:29.366211  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.378355  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.389105  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.399699  619438 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:33:29.409712  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.420697  619438 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.430508  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
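Taken together, the sed edits above leave 02-crio.conf pinning the pause image, switching CRI-O to the systemd cgroup driver with conmon in the "pod" cgroup, and opening unprivileged low ports. Expected state, assuming an otherwise default kicbase config:

	# Sketch: confirm the net effect of the sed series above.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",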
	I0917 00:33:29.440921  619438 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:33:29.450466  619438 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:33:29.459577  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:29.574875  619438 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 00:33:29.816990  619438 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:33:29.817095  619438 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:33:29.821723  619438 start.go:563] Will wait 60s for crictl version
	I0917 00:33:29.821780  619438 ssh_runner.go:195] Run: which crictl
	I0917 00:33:29.825613  619438 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:33:29.861449  619438 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:33:29.861530  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:33:29.917974  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:33:29.959407  619438 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:33:29.960768  619438 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:33:29.962037  619438 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0917 00:33:29.963347  619438 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
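The Go template in the command above packs network name, driver, subnet, gateway, MTU and container IPs into a single JSON blob in one `docker network inspect` call. When debugging, the same fields can be pulled one at a time:

	# Sketch: the same data, field by field.
	docker network inspect ha-671025 --format '{{.Name}} {{.Driver}}'
	docker network inspect ha-671025 \
	    --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'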
	I0917 00:33:29.990529  619438 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:33:29.995062  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:33:30.007594  619438 mustload.go:65] Loading cluster: ha-671025
	I0917 00:33:30.007810  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:30.008007  619438 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:33:30.028172  619438 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:33:30.028488  619438 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.4
	I0917 00:33:30.028502  619438 certs.go:194] generating shared ca certs ...
	I0917 00:33:30.028518  619438 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:30.028667  619438 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:33:30.028724  619438 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:33:30.028738  619438 certs.go:256] generating profile certs ...
	I0917 00:33:30.028835  619438 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:33:30.028918  619438 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.bb6f0fe7
	I0917 00:33:30.028969  619438 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:33:30.028985  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:33:30.029006  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:33:30.029022  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:33:30.029039  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:33:30.029053  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:33:30.029066  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:33:30.029085  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:33:30.029109  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:33:30.029181  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:33:30.029228  619438 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:33:30.029241  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:33:30.029285  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:33:30.029320  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:33:30.029350  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:33:30.029418  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:33:30.029458  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:30.029480  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:33:30.029497  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:33:30.029570  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:33:30.048859  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:33:30.137756  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:33:30.142385  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:33:30.157058  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:33:30.161473  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 00:33:30.176759  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:33:30.180509  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:33:30.193674  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:33:30.197197  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 00:33:30.210232  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:33:30.214138  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:33:30.227500  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:33:30.231351  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:33:30.244274  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:33:30.271911  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:33:30.299112  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:33:30.326476  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:33:30.352993  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:33:30.380621  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:33:30.406324  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:33:30.432139  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:33:30.458308  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:33:30.483817  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:33:30.509827  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:33:30.537659  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:33:30.557593  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 00:33:30.577579  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:33:30.597023  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 00:33:30.617353  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:33:30.636531  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:33:30.656268  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:33:30.676462  619438 ssh_runner.go:195] Run: openssl version
	I0917 00:33:30.682486  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:33:30.693023  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:30.696932  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:30.696986  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:30.704184  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:33:30.714256  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:33:30.725254  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:33:30.728941  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:33:30.729013  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:33:30.736673  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:33:30.746358  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:33:30.757231  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:33:30.761269  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:33:30.761351  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:33:30.768689  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
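The b5213941.0, 51391683.0 and 3ec20f2e.0 names above are OpenSSL subject-hash links: OpenSSL resolves CAs in /etc/ssl/certs by the hash of the certificate subject, so each symlink must be named "<hash>.0". Sketch of the derivation:

	# The link name is the subject hash of the cert it points at.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "${h}.0"   # expect: b5213941.0 for this CA, per the link above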
	I0917 00:33:30.779054  619438 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:33:30.783069  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:33:30.790436  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:33:30.797491  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:33:30.804684  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:33:30.811602  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:33:30.818603  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
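Each `-checkend 86400` run above asks whether the certificate remains valid for the next 86400 seconds (24 hours); a non-zero exit is what would force regeneration. Standalone form:

	# Sketch: exit 0 = still valid 24h from now, exit 1 = expiring.
	openssl x509 -noout -checkend 86400 \
	    -in /var/lib/minikube/certs/etcd/server.crt \
	    && echo "valid for 24h" || echo "would be regenerated"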
	I0917 00:33:30.825614  619438 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 crio true true} ...
	I0917 00:33:30.825731  619438 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
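In the unit above, the empty `ExecStart=` line first clears the packaged kubelet command so the following line fully replaces it; the drop-in lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp below). The merged result can be inspected on the node:

	# Sketch: show the base unit plus the drop-in, and the effective command.
	systemctl cat kubelet
	systemctl show kubelet -p ExecStart --no-pager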
	I0917 00:33:30.825755  619438 kube-vip.go:115] generating kube-vip config ...
	I0917 00:33:30.825793  619438 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:33:30.839517  619438 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
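kube-vip falls back here from IPVS-based control-plane load balancing to a plain ARP-advertised VIP because `lsmod | grep ip_vs` exited 1. Loading the modules before the restart would keep load balancing enabled (module names assumed to be the standard ip_vs set):

	# Sketch: preload IPVS so the lsmod probe above succeeds.
	sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
	lsmod | grep '^ip_vs'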
	I0917 00:33:30.839587  619438 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
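Per the env block in the manifest above (vip_leaderelection, vip_leasename: plndr-cp-lock), the kube-vip pods elect a holder for 192.168.49.254 through a coordination Lease in kube-system; which control-plane node currently owns the VIP is visible with:

	# Sketch: the Lease holder is the node advertising the VIP.
	kubectl -n kube-system get lease plndr-cp-lock \
	    -o jsonpath='{.spec.holderIdentity}{"\n"}'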
	I0917 00:33:30.839637  619438 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:33:30.849197  619438 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:33:30.849283  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:33:30.859805  619438 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 00:33:30.879168  619438 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:33:30.898461  619438 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:33:30.918131  619438 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:33:30.922054  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:33:30.934606  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:31.047135  619438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:33:31.060828  619438 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:33:31.061141  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:31.063169  619438 out.go:179] * Verifying Kubernetes components...
	I0917 00:33:31.064429  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:31.179306  619438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:33:31.194472  619438 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:33:31.194609  619438 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:33:31.194890  619438 node_ready.go:35] waiting up to 6m0s for node "ha-671025-m03" to be "Ready" ...
	I0917 00:33:31.198458  619438 node_ready.go:49] node "ha-671025-m03" is "Ready"
	I0917 00:33:31.198488  619438 node_ready.go:38] duration metric: took 3.579476ms for node "ha-671025-m03" to be "Ready" ...
	I0917 00:33:31.198503  619438 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:33:31.198550  619438 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:33:31.212138  619438 api_server.go:72] duration metric: took 151.254038ms to wait for apiserver process to appear ...
	I0917 00:33:31.212172  619438 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:33:31.212199  619438 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:33:31.217814  619438 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
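The same readiness probe can be issued by hand; verifying against minikube's own CA (path from the cert lines above) avoids an insecure -k flag:

	# Sketch: manual apiserver health probe; expect the body "ok".
	curl --cacert /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt \
	    https://192.168.49.2:8443/healthz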
	I0917 00:33:31.218774  619438 api_server.go:141] control plane version: v1.34.0
	I0917 00:33:31.218795  619438 api_server.go:131] duration metric: took 6.616763ms to wait for apiserver health ...
	I0917 00:33:31.218803  619438 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:33:31.225098  619438 system_pods.go:59] 24 kube-system pods found
	I0917 00:33:31.225134  619438 system_pods.go:61] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:31.225141  619438 system_pods.go:61] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:31.225149  619438 system_pods.go:61] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:33:31.225155  619438 system_pods.go:61] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:33:31.225163  619438 system_pods.go:61] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:33:31.225168  619438 system_pods.go:61] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:33:31.225177  619438 system_pods.go:61] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:33:31.225185  619438 system_pods.go:61] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:33:31.225190  619438 system_pods.go:61] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:33:31.225199  619438 system_pods.go:61] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:33:31.225205  619438 system_pods.go:61] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:33:31.225209  619438 system_pods.go:61] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:33:31.225213  619438 system_pods.go:61] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:33:31.225219  619438 system_pods.go:61] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:33:31.225225  619438 system_pods.go:61] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:33:31.225228  619438 system_pods.go:61] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:33:31.225231  619438 system_pods.go:61] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:33:31.225235  619438 system_pods.go:61] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:33:31.225237  619438 system_pods.go:61] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:33:31.225242  619438 system_pods.go:61] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:33:31.225247  619438 system_pods.go:61] "kube-vip-ha-671025" [bcb7c84b-932c-463e-a710-1d665741e70a] Running
	I0917 00:33:31.225250  619438 system_pods.go:61] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:33:31.225253  619438 system_pods.go:61] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:33:31.225255  619438 system_pods.go:61] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:33:31.225261  619438 system_pods.go:74] duration metric: took 6.452715ms to wait for pod list to return data ...
	I0917 00:33:31.225280  619438 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:33:31.228376  619438 default_sa.go:45] found service account: "default"
	I0917 00:33:31.228411  619438 default_sa.go:55] duration metric: took 3.119992ms for default service account to be created ...
	I0917 00:33:31.228422  619438 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:33:31.233445  619438 system_pods.go:86] 24 kube-system pods found
	I0917 00:33:31.233478  619438 system_pods.go:89] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:31.233487  619438 system_pods.go:89] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:31.233491  619438 system_pods.go:89] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:33:31.233495  619438 system_pods.go:89] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:33:31.233501  619438 system_pods.go:89] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:33:31.233504  619438 system_pods.go:89] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:33:31.233508  619438 system_pods.go:89] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:33:31.233511  619438 system_pods.go:89] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:33:31.233517  619438 system_pods.go:89] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:33:31.233523  619438 system_pods.go:89] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:33:31.233529  619438 system_pods.go:89] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:33:31.233535  619438 system_pods.go:89] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:33:31.233540  619438 system_pods.go:89] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:33:31.233548  619438 system_pods.go:89] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:33:31.233555  619438 system_pods.go:89] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:33:31.233559  619438 system_pods.go:89] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:33:31.233566  619438 system_pods.go:89] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:33:31.233570  619438 system_pods.go:89] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:33:31.233576  619438 system_pods.go:89] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:33:31.233581  619438 system_pods.go:89] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:33:31.233587  619438 system_pods.go:89] "kube-vip-ha-671025" [bcb7c84b-932c-463e-a710-1d665741e70a] Running
	I0917 00:33:31.233590  619438 system_pods.go:89] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:33:31.233596  619438 system_pods.go:89] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:33:31.233599  619438 system_pods.go:89] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:33:31.233605  619438 system_pods.go:126] duration metric: took 5.178303ms to wait for k8s-apps to be running ...
	I0917 00:33:31.233615  619438 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:33:31.233661  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:33:31.246667  619438 system_svc.go:56] duration metric: took 13.0386ms WaitForService to wait for kubelet
	I0917 00:33:31.246701  619438 kubeadm.go:578] duration metric: took 185.824043ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:33:31.246730  619438 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:33:31.250636  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:31.250665  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:31.250679  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:31.250684  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:31.250690  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:31.250694  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:31.250700  619438 node_conditions.go:105] duration metric: took 3.96358ms to run NodePressure ...
	I0917 00:33:31.250716  619438 start.go:241] waiting for startup goroutines ...
	I0917 00:33:31.250743  619438 start.go:255] writing updated cluster config ...
	I0917 00:33:31.253191  619438 out.go:203] 
	I0917 00:33:31.255560  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:31.255716  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:31.257849  619438 out.go:179] * Starting "ha-671025-m04" worker node in "ha-671025" cluster
	I0917 00:33:31.259401  619438 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:33:31.260716  619438 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:33:31.262230  619438 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:33:31.262264  619438 cache.go:58] Caching tarball of preloaded images
	I0917 00:33:31.262330  619438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:33:31.262386  619438 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:33:31.262432  619438 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:33:31.262581  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:31.285684  619438 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:33:31.285706  619438 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:33:31.285722  619438 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:33:31.285751  619438 start.go:360] acquireMachinesLock for ha-671025-m04: {Name:mka8d143727db583191b041d9fdffdc34290d3fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:33:31.285824  619438 start.go:364] duration metric: took 55.532µs to acquireMachinesLock for "ha-671025-m04"
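
The lock spec printed above ({Delay:500ms Timeout:10m0s}) gives away the acquisition shape: try a non-blocking lock, and on contention retry every Delay until Timeout expires (here it succeeded on the first try, in 55µs). A generic sketch of that pattern, with a buffered channel standing in for the real inter-process mutex (hypothetical; not minikube's implementation):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // slot is a stand-in lock: it holds at most one owner at a time.
    var slot = make(chan struct{}, 1)

    func tryLock() bool {
        select {
        case slot <- struct{}{}:
            return true
        default:
            return false
        }
    }

    // acquire polls tryLock every delay until timeout, mirroring the
    // {Delay:500ms Timeout:10m0s} spec in the log line above.
    func acquire(delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for !tryLock() {
            if time.Now().After(deadline) {
                return errors.New("timed out acquiring machines lock")
            }
            time.Sleep(delay)
        }
        return nil
    }

    func main() {
        start := time.Now()
        if err := acquire(500*time.Millisecond, 10*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("acquired in %s\n", time.Since(start)) // cf. "took 55.532µs" above
    }
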
	I0917 00:33:31.285843  619438 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:33:31.285851  619438 fix.go:54] fixHost starting: m04
	I0917 00:33:31.286063  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:33:31.305028  619438 fix.go:112] recreateIfNeeded on ha-671025-m04: state=Stopped err=<nil>
	W0917 00:33:31.305061  619438 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:33:31.307579  619438 out.go:252] * Restarting existing docker container for "ha-671025-m04" ...
	I0917 00:33:31.307671  619438 cli_runner.go:164] Run: docker start ha-671025-m04
	I0917 00:33:31.575879  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:33:31.595646  619438 kic.go:430] container "ha-671025-m04" state is running.
	I0917 00:33:31.596093  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:33:31.616747  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:31.617092  619438 machine.go:93] provisionDockerMachine start ...
	I0917 00:33:31.617170  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:33:31.636573  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:31.636791  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I0917 00:33:31.636802  619438 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:33:31.637630  619438 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36226->127.0.0.1:33193: read: connection reset by peer
	I0917 00:33:34.638709  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:33:37.640910  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:33:40.643532  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:33:43.644441  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:33:46.646832  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:33:49.647727  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:33:52.649735  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:33:55.650690  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:33:58.651030  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:01.651344  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:04.652841  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:07.653174  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:10.655161  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:13.656284  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:16.658064  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:19.658720  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:22.660831  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:25.661743  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:28.662460  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:31.663366  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:34.664358  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:37.666715  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:40.668752  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:43.669135  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:46.670730  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:49.671672  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:52.673038  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:55.674872  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:34:58.675353  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:01.676728  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:04.677624  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:07.680078  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:10.681718  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:13.682700  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:16.684701  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:19.686235  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:22.687651  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:25.689778  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:28.690485  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:31.691549  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:34.692838  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:37.695306  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:40.697845  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:43.698429  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:46.700789  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:49.701639  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:52.702370  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:55.704673  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:35:58.705496  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:01.706733  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:04.708175  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:07.709697  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:10.712190  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:13.713347  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:16.715721  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:19.716893  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:22.718572  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:25.720700  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:28.721777  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:31.722479  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:36:31.722518  619438 ubuntu.go:182] provisioning hostname "ha-671025-m04"
	I0917 00:36:31.722607  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:31.744520  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:31.744620  619438 machine.go:96] duration metric: took 3m0.127509973s to provisionDockerMachine
	I0917 00:36:31.744723  619438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:36:31.744770  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:31.764601  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:31.764736  619438 retry.go:31] will retry after 288.945807ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:32.054420  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:32.074595  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:32.074728  619438 retry.go:31] will retry after 272.369407ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:32.348309  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:32.368462  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:32.368608  619438 retry.go:31] will retry after 744.516266ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:33.113868  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:33.133032  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:33.133163  619438 retry.go:31] will retry after 492.951246ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:33.626619  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:33.647357  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	W0917 00:36:33.647505  619438 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:36:33.647528  619438 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:33.647587  619438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:36:33.647631  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:33.666215  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:33.666338  619438 retry.go:31] will retry after 272.675779ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:33.939657  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:33.958470  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:33.958588  619438 retry.go:31] will retry after 525.446207ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:34.484331  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:34.504346  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:34.504492  619438 retry.go:31] will retry after 588.594219ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:35.093370  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:35.116893  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	W0917 00:36:35.117042  619438 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:36:35.117086  619438 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:35.117113  619438 fix.go:56] duration metric: took 3m3.831261756s for fixHost
	I0917 00:36:35.117126  619438 start.go:83] releasing machines lock for "ha-671025-m04", held for 3m3.831291336s
	W0917 00:36:35.117142  619438 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:36:35.117240  619438 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:35.117254  619438 start.go:729] Will try again in 5 seconds ...
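
The three minutes of "connection refused" above are a dial loop: minikube resolves the published 22/tcp host port for the container (33193 on this run) and redials roughly every 3 seconds until its provisioning deadline passes; only then does fixHost give up and schedule the 5-second retry. A self-contained sketch of that loop, with the 3s spacing and 3m budget read off the timestamps above (not the libmachine source):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH redials until the port accepts a TCP connection or the
    // deadline passes, mirroring the cadence visible in the log.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil // sshd is up
            }
            fmt.Printf("Error dialing TCP: %v\n", err)
            time.Sleep(3 * time.Second)
        }
        return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
    }

    func main() {
        // 127.0.0.1:33193 is the host port Docker mapped for the node's sshd.
        if err := waitForSSH("127.0.0.1:33193", 3*time.Minute); err != nil {
            fmt.Println(err)
        }
    }

The second attempt starting at 00:36:40 below follows the identical pattern against the newly mapped port 33198 and fails the same way, because sshd inside the m04 container never comes up.
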
	I0917 00:36:40.118524  619438 start.go:360] acquireMachinesLock for ha-671025-m04: {Name:mka8d143727db583191b041d9fdffdc34290d3fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:36:40.118656  619438 start.go:364] duration metric: took 88.188µs to acquireMachinesLock for "ha-671025-m04"
	I0917 00:36:40.118689  619438 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:36:40.118698  619438 fix.go:54] fixHost starting: m04
	I0917 00:36:40.119106  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:36:40.139538  619438 fix.go:112] recreateIfNeeded on ha-671025-m04: state=Stopped err=<nil>
	W0917 00:36:40.139579  619438 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:36:40.141549  619438 out.go:252] * Restarting existing docker container for "ha-671025-m04" ...
	I0917 00:36:40.141624  619438 cli_runner.go:164] Run: docker start ha-671025-m04
	I0917 00:36:40.412862  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:36:40.433322  619438 kic.go:430] container "ha-671025-m04" state is running.
	I0917 00:36:40.433799  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:36:40.453513  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:36:40.453934  619438 machine.go:93] provisionDockerMachine start ...
	I0917 00:36:40.454059  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:36:40.473978  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:36:40.474315  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I0917 00:36:40.474331  619438 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:36:40.475099  619438 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33606->127.0.0.1:33198: read: connection reset by peer
	I0917 00:36:43.475724  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:36:46.476660  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:36:49.478345  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:36:52.479547  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:36:55.482132  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:36:58.483337  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:01.484607  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:04.485839  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:07.487714  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:10.489661  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:13.490227  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:16.492090  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:19.492645  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:22.493651  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:25.495677  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:28.496275  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:31.497224  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:34.497736  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:37.499709  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:40.502218  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:43.502692  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:46.504930  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:49.506113  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:52.506643  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:55.507569  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:37:58.507989  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:01.508674  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:04.509297  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:07.511674  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:10.512110  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:13.512683  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:16.515058  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:19.516277  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:22.517225  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:25.519308  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:28.519717  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:31.520615  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:34.522114  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:37.523670  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:40.526331  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:43.527374  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:46.529741  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:49.531301  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:52.532585  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:55.533793  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:58.534231  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:01.534621  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:04.536103  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:07.538458  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:10.540484  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:13.541711  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:16.543992  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:19.545340  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:22.546576  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:25.548676  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:28.549734  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:31.550736  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:34.551691  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:37.553774  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:40.555606  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:39:40.555645  619438 ubuntu.go:182] provisioning hostname "ha-671025-m04"
	I0917 00:39:40.555731  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:40.576194  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:40.576295  619438 machine.go:96] duration metric: took 3m0.122321612s to provisionDockerMachine
	I0917 00:39:40.576379  619438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:39:40.576440  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:40.595844  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:40.595977  619438 retry.go:31] will retry after 334.138339ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:40.931319  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:40.951370  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:40.951504  619438 retry.go:31] will retry after 347.147392ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:41.299070  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:41.319717  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:41.319850  619438 retry.go:31] will retry after 612.672267ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:41.933618  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:41.954663  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	W0917 00:39:41.954778  619438 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:39:41.954797  619438 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:41.954845  619438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:39:41.954878  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:41.975511  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:41.975621  619438 retry.go:31] will retry after 279.089961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:42.255093  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:42.275630  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:42.275759  619438 retry.go:31] will retry after 427.799265ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:42.704460  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:42.723085  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:42.723291  619438 retry.go:31] will retry after 748.226264ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:43.472625  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:43.493097  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	W0917 00:39:43.493238  619438 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:39:43.493260  619438 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:43.493279  619438 fix.go:56] duration metric: took 3m3.3745821s for fixHost
	I0917 00:39:43.493294  619438 start.go:83] releasing machines lock for "ha-671025-m04", held for 3m3.374622198s
	W0917 00:39:43.493451  619438 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-671025" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:43.495244  619438 out.go:203] 
	W0917 00:39:43.496536  619438 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:39:43.496558  619438 out.go:285] * 
	W0917 00:39:43.498254  619438 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 00:39:43.499426  619438 out.go:203] 
	
	
	==> CRI-O <==
	Sep 17 00:33:14 ha-671025 crio[565]: time="2025-09-17 00:33:14.250668570Z" level=info msg="Started container" PID=1371 containerID=0a6ec806f09b0ec6cd3c05e4e3ae47a201470e8dd91c163a0a50e778942c1fdf description=kube-system/coredns-66bc5c9577-vfj56/coredns id=e249fce6-f4cd-4113-83e0-50d04adcc10f name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b722ecf2f3e80164bf38e495945b2f9de2da062098248c531372f1254b04027
	Sep 17 00:33:14 ha-671025 crio[565]: time="2025-09-17 00:33:14.254529988Z" level=info msg="Started container" PID=1357 containerID=0f6f22dfaf3f5c42ab834fbdacc268222b9381892b372e6c6777b8cdc48ae94d description=kube-system/kube-proxy-f58dt/kube-proxy id=a0f2eb2e-8af2-4dfd-a58a-1737b5f99d21 name=/runtime.v1.RuntimeService/StartContainer sandboxID=86370afe3da8daa2b358bfa93e3418e66144d35d035fed0a638a50924fa59408
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.753340587Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.758517303Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.758557932Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.758575572Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.764982577Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.765047831Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.765068425Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.769374951Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.769549150Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.769575818Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.773978219Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.774011909Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.807516826Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ed976c02-d574-4c82-bfc5-c9beb8325877 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.807738230Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ed976c02-d574-4c82-bfc5-c9beb8325877 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.808425117Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f7b84450-4a24-4619-b6df-a4e028fc709d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.808644322Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f7b84450-4a24-4619-b6df-a4e028fc709d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.809516747Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f7135108-062d-4210-941f-2121b4150437 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.809630183Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.824058373Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e4482785ad19323f369936fbb4daa43031f78405e411d03a635704ce0b9bfa42/merged/etc/passwd: no such file or directory"
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.824101095Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e4482785ad19323f369936fbb4daa43031f78405e411d03a635704ce0b9bfa42/merged/etc/group: no such file or directory"
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.883592079Z" level=info msg="Created container ecf22eec472717336b0fb89198d6c0b167e76973e6e3cd230dd0afcde977a9a9: kube-system/storage-provisioner/storage-provisioner" id=f7135108-062d-4210-941f-2121b4150437 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.884330281Z" level=info msg="Starting container: ecf22eec472717336b0fb89198d6c0b167e76973e6e3cd230dd0afcde977a9a9" id=e3034afa-a009-4659-9e70-4826d4a036d3 name=/runtime.v1.RuntimeService/StartContainer
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.892093157Z" level=info msg="Started container" PID=1755 containerID=ecf22eec472717336b0fb89198d6c0b167e76973e6e3cd230dd0afcde977a9a9 description=kube-system/storage-provisioner/storage-provisioner id=e3034afa-a009-4659-9e70-4826d4a036d3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=84705f66b6f00fabea4a34fd2340cb783d9fd23e696a1d70dfe64392537e0e17
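
The CREATE/WRITE/RENAME lines above are CRI-O's CNI monitor reacting to kindnet rewriting its conflist: a filesystem watch on /etc/cni/net.d, with the default network re-resolved after each event. The same watch pattern in a few lines of Go, using github.com/fsnotify/fsnotify (illustrative; CRI-O's own watcher differs in detail):

    package main

    import (
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()

        // Watch the CNI config directory the log above refers to.
        if err := w.Add("/etc/cni/net.d"); err != nil {
            log.Fatal(err)
        }
        for {
            select {
            case ev := <-w.Events:
                // CRI-O re-reads its default CNI network on these events.
                if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
                    log.Printf("CNI monitoring event %q: %s", ev.Name, ev.Op)
                }
            case err := <-w.Errors:
                log.Println("watch error:", err)
            }
        }
    }
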
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ecf22eec47271       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   5 minutes ago       Running             storage-provisioner       3                   84705f66b6f00       storage-provisioner
	0a6ec806f09b0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   6 minutes ago       Running             coredns                   1                   3b722ecf2f3e8       coredns-66bc5c9577-vfj56
	911039394b566       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   6 minutes ago       Running             busybox                   1                   0d31993e30b9d       busybox-7b57f96db7-wj4r5
	0f6f22dfaf3f5       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   6 minutes ago       Running             kube-proxy                1                   86370afe3da8d       kube-proxy-f58dt
	d8a3a53722ee7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 minutes ago       Running             kindnet-cni               1                   573be4d17bc4c       kindnet-9zvhz
	79c32235f9c36       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago       Exited              storage-provisioner       2                   84705f66b6f00       storage-provisioner
	1151cd93da2ad       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   6 minutes ago       Running             coredns                   1                   4c29d74d630f3       coredns-66bc5c9577-mqh24
	dd21b88addb23       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   6 minutes ago       Running             kube-controller-manager   1                   17b3a59f2d7b6       kube-controller-manager-ha-671025
	c7b95b9bb5f9d       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   6 minutes ago       Running             kube-scheduler            1                   0d6a7ac1856cb       kube-scheduler-ha-671025
	3fa5cc179a477       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   6 minutes ago       Running             kube-apiserver            1                   c0bb4371ed6c8       kube-apiserver-ha-671025
	3a99a51aacd42       765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23   6 minutes ago       Running             kube-vip                  0                   aca3020b8c9d0       kube-vip-ha-671025
	feb54ecd21790       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   6 minutes ago       Running             etcd                      1                   ff786868f6409       etcd-ha-671025
	
	
	==> coredns [0a6ec806f09b0ec6cd3c05e4e3ae47a201470e8dd91c163a0a50e778942c1fdf] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41081 - 22204 "HINFO IN 3438997292128027948.7850884943177890662. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020285532s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
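
Both CoreDNS replicas report the same root cause: the kubernetes plugin's reflectors cannot list Services, Namespaces, or EndpointSlices because TCP to the service VIP 10.96.0.1:443 times out. A one-file probe for exactly that path (address taken from the errors above; run it from inside the pod's network namespace to reproduce):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The in-cluster apiserver service VIP from the CoreDNS errors above.
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
        if err != nil {
            fmt.Println("apiserver VIP unreachable:", err) // expect: i/o timeout
            return
        }
        conn.Close()
        fmt.Println("apiserver VIP reachable")
    }
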
	
	
	==> coredns [1151cd93da2add1289085967f6fd11dca725fe05835ee8882364ce8ef4d5c1d9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34114 - 63412 "HINFO IN 8932016049737155266.1565975528977438817. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04450606s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-671025
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T00_28_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:28:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:39:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:38:49 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:38:49 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:38:49 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:38:49 +0000   Wed, 17 Sep 2025 00:28:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-671025
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 8ed2fe35b45d401da396432da19b49e7
	  System UUID:                3f139a28-0338-43b0-8ed0-9128b9dcda65
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wj4r5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 coredns-66bc5c9577-mqh24             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 coredns-66bc5c9577-vfj56             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-ha-671025                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-9zvhz                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-671025             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-671025    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-f58dt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-671025             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-671025                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  Starting                 6m30s                  kube-proxy       
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)      kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           11m                    node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  NodeReady                11m                    kubelet          Node ha-671025 status is now: NodeReady
	  Normal  RegisteredNode           10m                    node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           8m22s                  node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  Starting                 6m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m45s (x8 over 6m45s)  kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m45s (x8 over 6m45s)  kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m45s (x8 over 6m45s)  kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m29s                  node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           6m29s                  node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           6m15s                  node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	
	
	Name:               ha-671025-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_17T00_29_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:29:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:39:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:33:22 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:33:22 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:33:22 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:33:22 +0000   Wed, 17 Sep 2025 00:29:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-671025-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 34a83f19fcce42489e31c52ddb1f71d8
	  System UUID:                7d7ccba3-1786-4f88-a69c-4a852e967ea0
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-zw5tc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 etcd-ha-671025-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-7scsq                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-671025-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-671025-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-4k8lz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-671025-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-671025-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m18s                  kube-proxy       
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  RegisteredNode           10m                    node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  NodeHasNoDiskPressure    8m28s (x8 over 8m28s)  kubelet          Node ha-671025-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m28s (x8 over 8m28s)  kubelet          Node ha-671025-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m28s (x8 over 8m28s)  kubelet          Node ha-671025-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           8m22s                  node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  Starting                 6m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m43s (x8 over 6m43s)  kubelet          Node ha-671025-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m43s (x8 over 6m43s)  kubelet          Node ha-671025-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m43s (x8 over 6m43s)  kubelet          Node ha-671025-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m29s                  node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode           6m29s                  node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode           6m15s                  node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	
	
	Name:               ha-671025-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_17T00_29_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:29:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:39:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:38:13 +0000   Wed, 17 Sep 2025 00:29:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:38:13 +0000   Wed, 17 Sep 2025 00:29:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:38:13 +0000   Wed, 17 Sep 2025 00:29:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:38:13 +0000   Wed, 17 Sep 2025 00:29:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-671025-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f2d39b7ecf04b12adfb34303f5413b3
	  System UUID:                ca019c4e-efee-45a1-854b-8ad90ea7fdf4
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-dk9cf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 etcd-ha-671025-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-9w6f7                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-671025-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-671025-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-q96zd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-671025-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-671025-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  RegisteredNode           10m                    node-controller  Node ha-671025-m03 event: Registered Node ha-671025-m03 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-671025-m03 event: Registered Node ha-671025-m03 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-671025-m03 event: Registered Node ha-671025-m03 in Controller
	  Normal  RegisteredNode           8m22s                  node-controller  Node ha-671025-m03 event: Registered Node ha-671025-m03 in Controller
	  Normal  RegisteredNode           6m29s                  node-controller  Node ha-671025-m03 event: Registered Node ha-671025-m03 in Controller
	  Normal  RegisteredNode           6m29s                  node-controller  Node ha-671025-m03 event: Registered Node ha-671025-m03 in Controller
	  Normal  Starting                 6m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m20s (x8 over 6m21s)  kubelet          Node ha-671025-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s (x8 over 6m21s)  kubelet          Node ha-671025-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s (x8 over 6m21s)  kubelet          Node ha-671025-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m15s                  node-controller  Node ha-671025-m03 event: Registered Node ha-671025-m03 in Controller
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	
	
	==> etcd [feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15] <==
	{"level":"warn","ts":"2025-09-17T00:33:22.931202Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:33:22.953772Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:33:23.053274Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:33:23.085456Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:33:23.153568Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:33:23.183346Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:33:23.184418Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:33:23.206048Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:33:23.213375Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:33:23.216456Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:33:23.234877Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:33:23.253514Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:33:23.352991Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:33:23.383661Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-17T00:33:23.417092Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"58f1161d61ce118","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:33:23.417146Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"58f1161d61ce118","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-09-17T00:33:24.690264Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"58f1161d61ce118","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-17T00:33:24.690347Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"58f1161d61ce118"}
	{"level":"info","ts":"2025-09-17T00:33:24.690387Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118"}
	{"level":"info","ts":"2025-09-17T00:33:24.695736Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"58f1161d61ce118","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-17T00:33:24.695780Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118"}
	{"level":"info","ts":"2025-09-17T00:33:24.727184Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118"}
	{"level":"info","ts":"2025-09-17T00:33:24.731280Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118"}
	{"level":"warn","ts":"2025-09-17T00:33:25.373568Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"58f1161d61ce118","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:33:25.373669Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"58f1161d61ce118","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	
	
	==> kernel <==
	 00:39:45 up  3:22,  0 users,  load average: 0.33, 0.63, 3.23
	Linux ha-671025 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [d8a3a53722ee71de725c2794a050878da7894fbc523bb6bac8efe7e38865e48e] <==
	I0917 00:39:04.762070       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:39:14.752701       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:39:14.752735       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:39:14.752907       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:39:14.752917       1 main.go:301] handling current node
	I0917 00:39:14.752930       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:39:14.752934       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:39:24.755944       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:39:24.755981       1 main.go:301] handling current node
	I0917 00:39:24.755998       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:39:24.756003       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:39:24.756183       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:39:24.756192       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:39:34.760510       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:39:34.760554       1 main.go:301] handling current node
	I0917 00:39:34.760573       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:39:34.760579       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:39:34.760773       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:39:34.760789       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:39:44.756468       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:39:44.756507       1 main.go:301] handling current node
	I0917 00:39:44.756526       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:39:44.756532       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:39:44.756690       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:39:44.756700       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da] <==
	I0917 00:33:12.199447       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0917 00:33:12.203011       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:33:12.204560       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0917 00:33:12.215378       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0917 00:33:12.225713       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0917 00:33:12.225748       1 policy_source.go:240] refreshing policies
	E0917 00:33:12.257458       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0917 00:33:12.275512       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 00:33:13.102620       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0917 00:33:13.467644       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0917 00:33:13.469377       1 controller.go:667] quota admission added evaluator for: endpoints
	I0917 00:33:13.475334       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 00:33:13.710304       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0917 00:33:15.400126       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0917 00:33:15.451962       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0917 00:33:15.550108       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0917 00:34:30.180357       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:34:36.295135       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:35:58.087614       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:36:04.861775       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:37:09.469711       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:37:20.231944       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:38:10.023844       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:38:42.747905       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:39:27.376187       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c] <==
	I0917 00:33:15.046416       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0917 00:33:15.046458       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0917 00:33:15.047562       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0917 00:33:15.047701       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0917 00:33:15.047729       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0917 00:33:15.047742       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0917 00:33:15.048114       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0917 00:33:15.050103       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0917 00:33:15.050156       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0917 00:33:15.050198       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0917 00:33:15.051603       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0917 00:33:15.052580       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0917 00:33:15.052596       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:33:15.052656       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0917 00:33:15.052705       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0917 00:33:15.052712       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0917 00:33:15.052716       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0917 00:33:15.072139       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:33:15.074323       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:33:15.079457       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0917 00:33:15.079609       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 00:33:15.079783       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-671025-m02"
	I0917 00:33:15.079806       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-671025"
	I0917 00:33:15.079783       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-671025-m03"
	I0917 00:33:15.079891       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [0f6f22dfaf3f5c42ab834fbdacc268222b9381892b372e6c6777b8cdc48ae94d] <==
	I0917 00:33:14.310969       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:33:14.385159       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:33:14.485410       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:33:14.485454       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:33:14.485579       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:33:14.505543       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:33:14.505612       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:33:14.510944       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:33:14.511517       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:33:14.511559       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:33:14.512935       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:33:14.512967       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:33:14.513038       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:33:14.513032       1 config.go:200] "Starting service config controller"
	I0917 00:33:14.513056       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:33:14.513059       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:33:14.513068       1 config.go:309] "Starting node config controller"
	I0917 00:33:14.513103       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:33:14.513111       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:33:14.613338       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:33:14.613363       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:33:14.613385       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3] <==
	I0917 00:33:01.038603       1 serving.go:386] Generated self-signed cert in-memory
	W0917 00:33:11.582258       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0917 00:33:11.582299       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 00:33:11.582308       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 00:33:12.169895       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:33:12.169942       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:33:12.174415       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:33:12.174635       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:33:12.174667       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:33:12.174692       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:33:12.274752       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 17 00:37:39 ha-671025 kubelet[719]: E0917 00:37:39.713641     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069459713288953  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:37:49 ha-671025 kubelet[719]: E0917 00:37:49.715228     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069469714967932  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:37:49 ha-671025 kubelet[719]: E0917 00:37:49.715267     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069469714967932  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:37:59 ha-671025 kubelet[719]: E0917 00:37:59.717124     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069479716902039  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:37:59 ha-671025 kubelet[719]: E0917 00:37:59.717155     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069479716902039  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:09 ha-671025 kubelet[719]: E0917 00:38:09.719199     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069489718952365  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:09 ha-671025 kubelet[719]: E0917 00:38:09.719231     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069489718952365  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:19 ha-671025 kubelet[719]: E0917 00:38:19.720791     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069499720508720  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:19 ha-671025 kubelet[719]: E0917 00:38:19.720832     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069499720508720  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:29 ha-671025 kubelet[719]: E0917 00:38:29.722482     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069509722189753  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:29 ha-671025 kubelet[719]: E0917 00:38:29.722526     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069509722189753  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:39 ha-671025 kubelet[719]: E0917 00:38:39.724772     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069519724406774  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:39 ha-671025 kubelet[719]: E0917 00:38:39.724820     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069519724406774  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:49 ha-671025 kubelet[719]: E0917 00:38:49.726218     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069529725971912  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:49 ha-671025 kubelet[719]: E0917 00:38:49.726259     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069529725971912  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:59 ha-671025 kubelet[719]: E0917 00:38:59.727787     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069539727493186  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:59 ha-671025 kubelet[719]: E0917 00:38:59.727827     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069539727493186  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:09 ha-671025 kubelet[719]: E0917 00:39:09.729035     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069549728835025  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:09 ha-671025 kubelet[719]: E0917 00:39:09.729066     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069549728835025  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:19 ha-671025 kubelet[719]: E0917 00:39:19.730347     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069559730086423  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:19 ha-671025 kubelet[719]: E0917 00:39:19.730386     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069559730086423  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:29 ha-671025 kubelet[719]: E0917 00:39:29.731647     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069569731379538  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:29 ha-671025 kubelet[719]: E0917 00:39:29.731688     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069569731379538  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:39 ha-671025 kubelet[719]: E0917 00:39:39.732899     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069579732705681  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:39 ha-671025 kubelet[719]: E0917 00:39:39.732940     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069579732705681  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-671025 -n ha-671025
helpers_test.go:269: (dbg) Run:  kubectl --context ha-671025 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (461.06s)
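The post-mortem above was collected automatically by the test harness. For reference, a minimal sketch of gathering the same node and kubelet evidence by hand, assuming the ha-671025 profile from this run is still present (the profile name, context name, and kubelet systemd unit are taken from the logs above, not from minikube documentation):

    # Describe all nodes, as the harness post-mortem does
    kubectl --context ha-671025 describe nodes

    # Inspect the recurring eviction-manager errors directly on the primary node
    out/minikube-linux-amd64 -p ha-671025 ssh -- sudo journalctl -u kubelet --no-pager | grep eviction_manager

    # List pods that are not Running, mirroring the field selector used above
    kubectl --context ha-671025 get po -A --field-selector=status.phase!=Running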

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (13.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-671025 node delete m03 --alsologtostderr -v 5: (10.713479148s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5: exit status 7 (550.005246ms)

                                                
                                                
-- stdout --
	ha-671025
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671025-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:39:56.574341  629436 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:39:56.574524  629436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:39:56.574538  629436 out.go:374] Setting ErrFile to fd 2...
	I0917 00:39:56.574544  629436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:39:56.574780  629436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:39:56.574996  629436 out.go:368] Setting JSON to false
	I0917 00:39:56.575026  629436 mustload.go:65] Loading cluster: ha-671025
	I0917 00:39:56.575151  629436 notify.go:220] Checking for updates...
	I0917 00:39:56.575524  629436 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:39:56.575562  629436 status.go:174] checking status of ha-671025 ...
	I0917 00:39:56.576102  629436 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:39:56.596925  629436 status.go:371] ha-671025 host status = "Running" (err=<nil>)
	I0917 00:39:56.596951  629436 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:39:56.597256  629436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:39:56.616081  629436 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:39:56.616432  629436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:39:56.616482  629436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:39:56.636847  629436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:39:56.734433  629436 ssh_runner.go:195] Run: systemctl --version
	I0917 00:39:56.740126  629436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:39:56.752723  629436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:39:56.813187  629436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-17 00:39:56.802055071 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:39:56.813826  629436 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:39:56.813858  629436 api_server.go:166] Checking apiserver status ...
	I0917 00:39:56.813907  629436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:39:56.826549  629436 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/880/cgroup
	W0917 00:39:56.837709  629436 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/880/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:39:56.837760  629436 ssh_runner.go:195] Run: ls
	I0917 00:39:56.841918  629436 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:39:56.846234  629436 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:39:56.846329  629436 status.go:463] ha-671025 apiserver status = Running (err=<nil>)
	I0917 00:39:56.846346  629436 status.go:176] ha-671025 status: &{Name:ha-671025 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:39:56.846364  629436 status.go:174] checking status of ha-671025-m02 ...
	I0917 00:39:56.846648  629436 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:39:56.866043  629436 status.go:371] ha-671025-m02 host status = "Running" (err=<nil>)
	I0917 00:39:56.866069  629436 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:39:56.866315  629436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:39:56.888526  629436 host.go:66] Checking if "ha-671025-m02" exists ...
	I0917 00:39:56.888797  629436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:39:56.888834  629436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:39:56.907785  629436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:39:57.003056  629436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:39:57.016132  629436 kubeconfig.go:125] found "ha-671025" server: "https://192.168.49.254:8443"
	I0917 00:39:57.016162  629436 api_server.go:166] Checking apiserver status ...
	I0917 00:39:57.016195  629436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:39:57.029296  629436 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/370/cgroup
	W0917 00:39:57.041309  629436 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/370/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:39:57.041434  629436 ssh_runner.go:195] Run: ls
	I0917 00:39:57.045452  629436 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:39:57.051459  629436 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:39:57.051488  629436 status.go:463] ha-671025-m02 apiserver status = Running (err=<nil>)
	I0917 00:39:57.051497  629436 status.go:176] ha-671025-m02 status: &{Name:ha-671025-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:39:57.051525  629436 status.go:174] checking status of ha-671025-m04 ...
	I0917 00:39:57.051842  629436 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:39:57.070643  629436 status.go:371] ha-671025-m04 host status = "Stopped" (err=<nil>)
	I0917 00:39:57.070668  629436 status.go:384] host is not running, skipping remaining checks
	I0917 00:39:57.070675  629436 status.go:176] ha-671025-m04 status: &{Name:ha-671025-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5" : exit status 7
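The failure above turns on minikube's API-server health probe: status.go polls the HA virtual IP at https://192.168.49.254:8443/healthz and treats an HTTP 200 with body "ok" as healthy (the api_server.go:253/279 lines in the stderr). Below is a minimal Go sketch of that style of probe, not minikube's actual implementation; TLS verification is skipped here purely as a stand-in for the cluster-CA handling minikube really performs.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy mirrors the probe in the log: GET <endpoint>/healthz and
// treat HTTP 200 with body "ok" as healthy. InsecureSkipVerify is a sketch-only
// shortcut; a real client should verify against the cluster CA.
func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	healthy, err := apiserverHealthy("https://192.168.49.254:8443")
	fmt.Printf("healthy=%v err=%v\n", healthy, err)
}

Note that both control planes pass this probe in the stderr above; the exit status 7 appears to be minikube's composite "not running" status code, consistent with the stopped m04 worker rather than a /healthz failure.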
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-671025
helpers_test.go:243: (dbg) docker inspect ha-671025:

-- stdout --
	[
	    {
	        "Id": "843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea",
	        "Created": "2025-09-17T00:28:07.60079298Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 619633,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:32:53.286176868Z",
	            "FinishedAt": "2025-09-17T00:32:52.645586403Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/hostname",
	        "HostsPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/hosts",
	        "LogPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea-json.log",
	        "Name": "/ha-671025",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-671025:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-671025",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea",
	                "LowerDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-671025",
	                "Source": "/var/lib/docker/volumes/ha-671025/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-671025",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-671025",
	                "name.minikube.sigs.k8s.io": "ha-671025",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3e88ab0b1cbcc741c291833bfdeaa68e46e3b5db9345dc0aa90d473d7f1955a0",
	            "SandboxKey": "/var/run/docker/netns/3e88ab0b1cbc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-671025": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:78:32:58:80:a9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0c35d0ccc41812bde7181e33c481a92e6c52d2d90efef6c84bca54a78763ef8",
	                    "EndpointID": "62110bd5e439ab2c08160ae7846f5c9267265e2e870f01c3985d76fb403512f7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-671025",
	                        "843490787feb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
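Most of the checks in this post-mortem read single fields out of JSON like the above by shelling out to docker container inspect with a Go template, as the cli_runner.go lines show (--format={{.State.Status}}, and the 22/tcp HostPort lookup). A small self-contained sketch of that same pattern, assuming only that the docker CLI is on PATH and the ha-671025 container exists:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspect runs `docker container inspect -f <format> <name>` and returns the
// trimmed output, the same pattern as the cli_runner.go calls in this log.
func inspect(name, format string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// Container state, e.g. "running" (cf. "State"/"Status" in the JSON above).
	state, _ := inspect("ha-671025", "{{.State.Status}}")
	fmt.Println("state:", state)

	// Published host port for the container's SSH port 22/tcp, e.g. "33178"
	// (cf. NetworkSettings.Ports above).
	port, _ := inspect("ha-671025", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	fmt.Println("ssh port:", port)
}

The port query is what turns the container's internal SSH port 22 into the 127.0.0.1:33178 endpoint that the sshutil.go lines later dial.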
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-671025 -n ha-671025
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-671025 logs -n 25: (1.245273371s)
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m02 sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025-m02.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ cp      │ ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt ha-671025-m04:/home/docker/cp-test_ha-671025-m03_ha-671025-m04.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025-m04.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp testdata/cp-test.txt ha-671025-m04:/home/docker/cp-test.txt                                                            │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile688907033/001/cp-test_ha-671025-m04.txt │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025:/home/docker/cp-test_ha-671025-m04_ha-671025.txt                      │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025.txt                                                │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025-m02:/home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m02 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025-m03:/home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ node    │ ha-671025 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:31 UTC │
	│ node    │ ha-671025 node start m02 --alsologtostderr -v 5                                                                                     │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:31 UTC │ 17 Sep 25 00:31 UTC │
	│ node    │ ha-671025 node list --alsologtostderr -v 5                                                                                          │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:32 UTC │                     │
	│ stop    │ ha-671025 stop --alsologtostderr -v 5                                                                                               │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:32 UTC │ 17 Sep 25 00:32 UTC │
	│ start   │ ha-671025 start --wait true --alsologtostderr -v 5                                                                                  │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:32 UTC │                     │
	│ node    │ ha-671025 node list --alsologtostderr -v 5                                                                                          │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:39 UTC │                     │
	│ node    │ ha-671025 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:39 UTC │ 17 Sep 25 00:39 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:32:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:32:53.048533  619438 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:32:53.048790  619438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:32:53.048798  619438 out.go:374] Setting ErrFile to fd 2...
	I0917 00:32:53.048801  619438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:32:53.049018  619438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:32:53.049513  619438 out.go:368] Setting JSON to false
	I0917 00:32:53.050516  619438 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11716,"bootTime":1758057457,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:32:53.050646  619438 start.go:140] virtualization: kvm guest
	I0917 00:32:53.052823  619438 out.go:179] * [ha-671025] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:32:53.054178  619438 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:32:53.054271  619438 notify.go:220] Checking for updates...
	I0917 00:32:53.056434  619438 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:32:53.057686  619438 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:32:53.058908  619438 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 00:32:53.060062  619438 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:32:53.061204  619438 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:32:53.062799  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:32:53.062904  619438 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:32:53.089453  619438 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:32:53.089539  619438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:32:53.148341  619438 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:32:53.138207862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:32:53.148496  619438 docker.go:318] overlay module found
	I0917 00:32:53.150179  619438 out.go:179] * Using the docker driver based on existing profile
	I0917 00:32:53.151230  619438 start.go:304] selected driver: docker
	I0917 00:32:53.151250  619438 start.go:918] validating driver "docker" against &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:32:53.151427  619438 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:32:53.151523  619438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:32:53.207764  619438 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:32:53.197259177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:32:53.208608  619438 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:32:53.208644  619438 cni.go:84] Creating CNI manager for ""
	I0917 00:32:53.208723  619438 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:32:53.208799  619438 start.go:348] cluster config:
	{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:32:53.210881  619438 out.go:179] * Starting "ha-671025" primary control-plane node in "ha-671025" cluster
	I0917 00:32:53.212367  619438 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:32:53.213541  619438 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:32:53.214652  619438 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:32:53.214718  619438 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 00:32:53.214729  619438 cache.go:58] Caching tarball of preloaded images
	I0917 00:32:53.214774  619438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:32:53.214807  619438 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:32:53.214815  619438 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:32:53.214955  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:32:53.239640  619438 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:32:53.239670  619438 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:32:53.239694  619438 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:32:53.239727  619438 start.go:360] acquireMachinesLock for ha-671025: {Name:mk59b9e849284ed1f29625993b42430f4f0355ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:32:53.239821  619438 start.go:364] duration metric: took 66.466µs to acquireMachinesLock for "ha-671025"
	I0917 00:32:53.239847  619438 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:32:53.239857  619438 fix.go:54] fixHost starting: 
	I0917 00:32:53.240183  619438 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:32:53.258645  619438 fix.go:112] recreateIfNeeded on ha-671025: state=Stopped err=<nil>
	W0917 00:32:53.258676  619438 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:32:53.260365  619438 out.go:252] * Restarting existing docker container for "ha-671025" ...
	I0917 00:32:53.260462  619438 cli_runner.go:164] Run: docker start ha-671025
	I0917 00:32:53.507970  619438 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:32:53.529432  619438 kic.go:430] container "ha-671025" state is running.
	I0917 00:32:53.530679  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:32:53.550608  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:32:53.550906  619438 machine.go:93] provisionDockerMachine start ...
	I0917 00:32:53.551014  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:53.571235  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:32:53.571518  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0917 00:32:53.571532  619438 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:32:53.572179  619438 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48548->127.0.0.1:33178: read: connection reset by peer
	I0917 00:32:56.710627  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025
	
	I0917 00:32:56.710663  619438 ubuntu.go:182] provisioning hostname "ha-671025"
	I0917 00:32:56.710724  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:56.729879  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:32:56.730123  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0917 00:32:56.730136  619438 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025 && echo "ha-671025" | sudo tee /etc/hostname
	I0917 00:32:56.882161  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025
	
	I0917 00:32:56.882256  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:56.901113  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:32:56.901437  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0917 00:32:56.901465  619438 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:32:57.039832  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:32:57.039868  619438 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:32:57.039923  619438 ubuntu.go:190] setting up certificates
	I0917 00:32:57.039945  619438 provision.go:84] configureAuth start
	I0917 00:32:57.040038  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:32:57.059654  619438 provision.go:143] copyHostCerts
	I0917 00:32:57.059702  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:32:57.059734  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:32:57.059744  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:32:57.059817  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:32:57.059920  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:32:57.059938  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:32:57.059953  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:32:57.059984  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:32:57.060042  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:32:57.060059  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:32:57.060063  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:32:57.060107  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:32:57.060165  619438 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025 san=[127.0.0.1 192.168.49.2 ha-671025 localhost minikube]
	I0917 00:32:57.261590  619438 provision.go:177] copyRemoteCerts
	I0917 00:32:57.261669  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:32:57.261706  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:57.282218  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:57.380298  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:32:57.380375  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:32:57.406100  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:32:57.406164  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 00:32:57.431902  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:32:57.431973  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 00:32:57.458627  619438 provision.go:87] duration metric: took 418.658957ms to configureAuth
	I0917 00:32:57.458662  619438 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:32:57.458871  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:32:57.458975  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:57.477933  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:32:57.478176  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0917 00:32:57.478194  619438 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:32:57.778279  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:32:57.778306  619438 machine.go:96] duration metric: took 4.227377039s to provisionDockerMachine
	I0917 00:32:57.778321  619438 start.go:293] postStartSetup for "ha-671025" (driver="docker")
	I0917 00:32:57.778335  619438 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:32:57.778405  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:32:57.778457  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:57.799370  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:57.898480  619438 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:32:57.902232  619438 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:32:57.902263  619438 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:32:57.902270  619438 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:32:57.902278  619438 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:32:57.902290  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:32:57.902356  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:32:57.902449  619438 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:32:57.902461  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:32:57.902551  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:32:57.912046  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:32:57.938010  619438 start.go:296] duration metric: took 159.669671ms for postStartSetup
	I0917 00:32:57.938093  619438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:32:57.938130  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:57.958300  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:58.051975  619438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:32:58.057124  619438 fix.go:56] duration metric: took 4.817259212s for fixHost
	I0917 00:32:58.057152  619438 start.go:83] releasing machines lock for "ha-671025", held for 4.817316777s
	I0917 00:32:58.057223  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:32:58.076270  619438 ssh_runner.go:195] Run: cat /version.json
	I0917 00:32:58.076324  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:58.076348  619438 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:32:58.076443  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:58.096247  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:58.097159  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:58.262989  619438 ssh_runner.go:195] Run: systemctl --version
	I0917 00:32:58.267773  619438 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:32:58.409261  619438 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:32:58.414211  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:32:58.423687  619438 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:32:58.423780  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:32:58.433966  619438 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:32:58.434000  619438 start.go:495] detecting cgroup driver to use...
	I0917 00:32:58.434033  619438 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:32:58.434084  619438 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:32:58.447559  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:32:58.460424  619438 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:32:58.460531  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:32:58.474181  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:32:58.487071  619438 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:32:58.555422  619438 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:32:58.624823  619438 docker.go:234] disabling docker service ...
	I0917 00:32:58.624887  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:32:58.638410  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:32:58.650440  619438 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:32:58.717056  619438 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:32:58.784599  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:32:58.796601  619438 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:32:58.814550  619438 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:32:58.814628  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.825014  619438 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:32:58.825076  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.835600  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.845903  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.856370  619438 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:32:58.866050  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.876375  619438 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.886563  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.896783  619438 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:32:58.905534  619438 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:32:58.914324  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:32:58.980288  619438 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 00:32:59.086529  619438 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:32:59.086607  619438 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:32:59.090665  619438 start.go:563] Will wait 60s for crictl version
	I0917 00:32:59.090717  619438 ssh_runner.go:195] Run: which crictl
	I0917 00:32:59.094291  619438 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:32:59.129626  619438 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:32:59.129717  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:32:59.166530  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:32:59.205640  619438 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:32:59.206928  619438 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:32:59.224561  619438 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:32:59.228789  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
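The hosts update uses a common idiom: a plain `> /etc/hosts` redirection would be performed by the unprivileged shell, so the filtered content goes to a PID-suffixed temp file first and `sudo cp` installs it. A generic sketch of the same pattern (HOST and IP are placeholders):

	HOST=host.minikube.internal IP=192.168.49.1
	{ grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm /tmp/hosts.$$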
	I0917 00:32:59.241758  619438 kubeadm.go:875] updating cluster {Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-ga
dget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fal
se DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:32:59.241920  619438 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:32:59.241988  619438 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:32:59.285898  619438 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:32:59.285921  619438 crio.go:433] Images already preloaded, skipping extraction
	I0917 00:32:59.285968  619438 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:32:59.321059  619438 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:32:59.321084  619438 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:32:59.321093  619438 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0917 00:32:59.321190  619438 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:32:59.321250  619438 ssh_runner.go:195] Run: crio config
	I0917 00:32:59.369526  619438 cni.go:84] Creating CNI manager for ""
	I0917 00:32:59.369549  619438 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:32:59.369567  619438 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:32:59.369587  619438 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-671025 NodeName:ha-671025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:32:59.369753  619438 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-671025"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
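	
	Recent kubeadm releases can sanity-check a generated file like the one above before it is applied; a hedged one-liner against the path the log copies it to, assuming kubeadm is on the node's PATH:
	
		sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new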
	
	I0917 00:32:59.369775  619438 kube-vip.go:115] generating kube-vip config ...
	I0917 00:32:59.369814  619438 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:32:59.383509  619438 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
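This fallback is expected inside a kicbase container: kernel modules come from the host, so unless the host has already loaded IPVS, `lsmod` finds nothing and kube-vip runs in ARP-failover mode only. On a host where the modules are available, they can be loaded explicitly (a sketch):

	lsmod | grep ip_vs || sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh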
	I0917 00:32:59.383620  619438 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
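Two quick ways to observe this manifest doing its job once the pod is up, using only names taken from the config above (address 192.168.49.254, interface eth0, lease plndr-cp-lock):

	# the elected leader holds the VIP as a /32 on eth0 (vip_cidr is "32"):
	ip addr show dev eth0 | grep 192.168.49.254

	# leader election is a coordination.k8s.io Lease in kube-system:
	kubectl -n kube-system get lease plndr-cp-lock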
	I0917 00:32:59.383670  619438 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:32:59.393067  619438 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:32:59.393127  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:32:59.402584  619438 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0917 00:32:59.422262  619438 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:32:59.442170  619438 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I0917 00:32:59.461958  619438 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:32:59.481675  619438 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:32:59.485564  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:32:59.497547  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:32:59.561107  619438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:32:59.583877  619438 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.2
	I0917 00:32:59.583902  619438 certs.go:194] generating shared ca certs ...
	I0917 00:32:59.583919  619438 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:32:59.584079  619438 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:32:59.584130  619438 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:32:59.584138  619438 certs.go:256] generating profile certs ...
	I0917 00:32:59.584206  619438 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:32:59.584231  619438 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.5d6eefc6
	I0917 00:32:59.584246  619438 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.5d6eefc6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0917 00:33:00.130871  619438 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.5d6eefc6 ...
	I0917 00:33:00.130908  619438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.5d6eefc6: {Name:mkf467d0f9030b6e7125c3be410cb9c880d64270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:00.131088  619438 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.5d6eefc6 ...
	I0917 00:33:00.131108  619438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.5d6eefc6: {Name:mk8b3c4ad94a18f1741ce8fdbeceb16bceee6f1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:00.131220  619438 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.5d6eefc6 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt
	I0917 00:33:00.131404  619438 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.5d6eefc6 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key
	I0917 00:33:00.131601  619438 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:33:00.131625  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:33:00.131643  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:33:00.131658  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:33:00.131673  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:33:00.131687  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:33:00.131702  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:33:00.131714  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:33:00.131729  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:33:00.131788  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:33:00.131823  619438 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:33:00.131830  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:33:00.131857  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:33:00.131878  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:33:00.131897  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:33:00.131942  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:33:00.131980  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:00.132001  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:33:00.132015  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:33:00.132585  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:33:00.165089  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:33:00.198657  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:33:00.239751  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:33:00.280419  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:33:00.317099  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:33:00.355265  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:33:00.390225  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:33:00.418200  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:33:00.443790  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:33:00.469778  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:33:00.495605  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
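With the bundle in place, the SANs minted during the apiserver cert generation above can be read back; the list should match the IPs logged there (service VIP 10.96.0.1, loopback, all three control-plane node IPs, and the kube-vip address 192.168.49.254):

	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'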
	I0917 00:33:00.516723  619438 ssh_runner.go:195] Run: openssl version
	I0917 00:33:00.522849  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:33:00.533838  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:00.538041  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:00.538112  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:00.545733  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:33:00.555787  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:33:00.566338  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:33:00.570140  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:33:00.570203  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:33:00.577687  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:33:00.587720  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:33:00.599252  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:33:00.603349  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:33:00.603456  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:33:00.611701  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
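The `<hash>.0` names being linked here are OpenSSL subject hashes, the same trust-store layout `c_rehash` or `update-ca-certificates` would produce; the hash printed by the first command is the basename of the symlink the second one lists:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # e.g. b5213941
	ls -l /etc/ssl/certs/b5213941.0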
	I0917 00:33:00.622604  619438 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:33:00.626359  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:33:00.633232  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:33:00.640671  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:33:00.647926  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:33:00.655266  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:33:00.662987  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
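`-checkend 86400` asks whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means yes, 1 means it expires inside the window, which is what gates regeneration here. For example:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "valid for >24h" || echo "expires within 24h"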
	I0917 00:33:00.670413  619438 kubeadm.go:392] StartCluster: {Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadge
t:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:33:00.670534  619438 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 00:33:00.670583  619438 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:33:00.712724  619438 cri.go:89] found id: "dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c"
	I0917 00:33:00.712747  619438 cri.go:89] found id: "c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3"
	I0917 00:33:00.712751  619438 cri.go:89] found id: "3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da"
	I0917 00:33:00.712754  619438 cri.go:89] found id: "3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49"
	I0917 00:33:00.712757  619438 cri.go:89] found id: "feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15"
	I0917 00:33:00.712761  619438 cri.go:89] found id: ""
	I0917 00:33:00.712805  619438 ssh_runner.go:195] Run: sudo runc list -f json
	I0917 00:33:00.733477  619438 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49","pid":805,"status":"running","bundle":"/run/containers/storage/overlay-containers/3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49/userdata","rootfs":"/var/lib/containers/storage/overlay/d1bbef73ef376ea943ccf80c23fb8fd4556f886e52e63a59db0627508fb2430b/merged","created":"2025-09-17T00:33:00.224803069Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d64ad60b","io.kubernetes.container.name":"kube-vip","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d64ad60b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMes
sagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:33:00.170354801Z","io.kubernetes.cri-o.Image":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.ImageName":"ghcr.io/kube-vip/kube-vip:v1.0.0","io.kubernetes.cri-o.ImageRef":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-vip\",\"io.kubernetes.pod.name\":\"kube-vip-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a7817082b8b3b4ebaac6b1c6cc40fe3e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-vip-ha-671025_a7817082b8b3b4ebaac6b1c6cc40fe3e/kube-vip/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-vip\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/
storage/overlay/d1bbef73ef376ea943ccf80c23fb8fd4556f886e52e63a59db0627508fb2430b/merged","io.kubernetes.cri-o.Name":"k8s_kube-vip_kube-vip-ha-671025_kube-system_a7817082b8b3b4ebaac6b1c6cc40fe3e_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/aca3020b8c9d03c59812f32aa02323ace09e6b9784e7f9b6eae4976a3eab2f1d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"aca3020b8c9d03c59812f32aa02323ace09e6b9784e7f9b6eae4976a3eab2f1d","io.kubernetes.cri-o.SandboxName":"k8s_kube-vip-ha-671025_kube-system_a7817082b8b3b4ebaac6b1c6cc40fe3e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a7817082b8b3b4ebaac6b1c6cc40fe3e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a781708
2b8b3b4ebaac6b1c6cc40fe3e/containers/kube-vip/367d19bd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/admin.conf\",\"host_path\":\"/etc/kubernetes/admin.conf\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-vip-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a7817082b8b3b4ebaac6b1c6cc40fe3e","kubernetes.io/config.hash":"a7817082b8b3b4ebaac6b1c6cc40fe3e","kubernetes.io/config.seen":"2025-09-17T00:32:59.669171997Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da","pid":880,"status":"running","bundle":"/run/containers/
storage/overlay-containers/3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da/userdata","rootfs":"/var/lib/containers/storage/overlay/9b7a3dc090f584f6e4f5509cd9284edde85ace5b420fc8c9f6eae4139c98d2aa/merged","created":"2025-09-17T00:33:00.275833142Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d671eaa0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d671eaa0\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8443,\\\"containerPort\\\":8443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePa
th\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:33:00.202504428Z","io.kubernetes.cri-o.Image":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri-o.ImageRef":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b5ccb738eb1160dc60c2973028d04964\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-ha-671025_b5ccb738eb1160dc60c2973028d04964/kube-apiserver/1.log","io.kuberne
tes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9b7a3dc090f584f6e4f5509cd9284edde85ace5b420fc8c9f6eae4139c98d2aa/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-ha-671025_kube-system_b5ccb738eb1160dc60c2973028d04964_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c0bb4371ed6c8742b2ad9f89d7b5b46fbc83b2b33c92890300a7de93cb2ebbb6/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c0bb4371ed6c8742b2ad9f89d7b5b46fbc83b2b33c92890300a7de93cb2ebbb6","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-ha-671025_kube-system_b5ccb738eb1160dc60c2973028d04964_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b5ccb738eb1160dc60c2973028d04964/containers/kube-ap
iserver/6df491f2\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b5ccb738eb1160dc60c2973028d04964/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":f
alse}]","io.kubernetes.pod.name":"kube-apiserver-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b5ccb738eb1160dc60c2973028d04964","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"b5ccb738eb1160dc60c2973028d04964","kubernetes.io/config.seen":"2025-09-17T00:32:59.669167256Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3","pid":894,"status":"running","bundle":"/run/containers/storage/overlay-containers/c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3/userdata","rootfs":"/var/lib/containers/storage/overlay/064810f36ba8359e1cc403cdd3631d6973a
9bffec85a2a35b5e8e008790d2da1/merged","created":"2025-09-17T00:33:00.274952825Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"85eae708","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"85eae708\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID"
:"c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:33:00.203434002Z","io.kubernetes.cri-o.Image":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri-o.ImageRef":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"74a9cbd6392d4b9acfdd053de2761cb8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-ha-671025_74a9cbd6392d4b9acfdd053de2761cb8/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/064810f36ba8359e1cc403cdd3631d6973a9bffec85
a2a35b5e8e008790d2da1/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-ha-671025_kube-system_74a9cbd6392d4b9acfdd053de2761cb8_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0d6a7ac1856cbec973e10d8124dc32d2336942aefec9e4e328bba1938afb798a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0d6a7ac1856cbec973e10d8124dc32d2336942aefec9e4e328bba1938afb798a","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-ha-671025_kube-system_74a9cbd6392d4b9acfdd053de2761cb8_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/74a9cbd6392d4b9acfdd053de2761cb8/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/74a9cbd6392d4b9acfdd053de2761cb8/containers/kube
-scheduler/513703c7\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"74a9cbd6392d4b9acfdd053de2761cb8","kubernetes.io/config.hash":"74a9cbd6392d4b9acfdd053de2761cb8","kubernetes.io/config.seen":"2025-09-17T00:32:59.669170685Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c","pid":914,"status":"running","bundle":"/run/containers/storage/overlay-contai
ners/dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c/userdata","rootfs":"/var/lib/containers/storage/overlay/7b172e441c6d71eaa8c8337753bce771b451d1d95369d9d84519996303a3c5c0/merged","created":"2025-09-17T00:33:00.286793858Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7eaa1830","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7eaa1830\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/d
ev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:33:00.204654096Z","io.kubernetes.cri-o.Image":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri-o.ImageRef":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8d1e0f98935496199c8e8278a2410d09\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-ha-671025_8d1e0f98935496199c8e8278a2410d09/kube-c
ontroller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7b172e441c6d71eaa8c8337753bce771b451d1d95369d9d84519996303a3c5c0/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-ha-671025_kube-system_8d1e0f98935496199c8e8278a2410d09_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/17b3a59f2d7b6e908cfd321a66c6b87feb6fb4fe0c647bb872c8981c7768653d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"17b3a59f2d7b6e908cfd321a66c6b87feb6fb4fe0c647bb872c8981c7768653d","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-ha-671025_kube-system_8d1e0f98935496199c8e8278a2410d09_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/
etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8d1e0f98935496199c8e8278a2410d09/containers/kube-controller-manager/7587fc8c\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8d1e0f98935496199c8e8278a2410d09/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"ho
st_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8d1e0f98935496199c8e8278a2410d09","kubernetes.io/config.hash":"8d1e0f98935496199c8e8278a2410d09","kubernetes.io/config.seen":"2025-09-17T00:32:59.669169006Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.system
d.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15","pid":809,"status":"running","bundle":"/run/containers/storage/overlay-containers/feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15/userdata","rootfs":"/var/lib/containers/storage/overlay/0de8b6318aa0eefff40d78b1a2eccd71a123a2f8a8081d228455fb7b3b8e91aa/merged","created":"2025-09-17T00:33:00.227524758Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9e20c65","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9e20c65\",\"io.kubernetes.container.ports\":\"[{\
\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:33:00.156861142Z","io.kubernetes.cri-o.Image":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri-o.ImageRef":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"629bf94aa
8286a4aae957269fae7c79b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-ha-671025_629bf94aa8286a4aae957269fae7c79b/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0de8b6318aa0eefff40d78b1a2eccd71a123a2f8a8081d228455fb7b3b8e91aa/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-ha-671025_kube-system_629bf94aa8286a4aae957269fae7c79b_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ff786868f6409aa327dcae8a4aa518d72def9dcd14446677c7ba027c7a4a57b9/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ff786868f6409aa327dcae8a4aa518d72def9dcd14446677c7ba027c7a4a57b9","io.kubernetes.cri-o.SandboxName":"k8s_etcd-ha-671025_kube-system_629bf94aa8286a4aae957269fae7c79b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\
":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/629bf94aa8286a4aae957269fae7c79b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/629bf94aa8286a4aae957269fae7c79b/containers/etcd/188c438f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"629bf94aa8286a4aae957269fae7c79b","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"629bf94aa8286a4aae957269fae7c79b",
"kubernetes.io/config.seen":"2025-09-17T00:32:59.669161890Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0917 00:33:00.733792  619438 cri.go:126] list returned 5 containers
	I0917 00:33:00.733811  619438 cri.go:129] container: {ID:3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49 Status:running}
	I0917 00:33:00.733830  619438 cri.go:135] skipping {3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49 running}: state = "running", want "paused"
	I0917 00:33:00.733846  619438 cri.go:129] container: {ID:3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da Status:running}
	I0917 00:33:00.733857  619438 cri.go:135] skipping {3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da running}: state = "running", want "paused"
	I0917 00:33:00.733867  619438 cri.go:129] container: {ID:c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3 Status:running}
	I0917 00:33:00.733875  619438 cri.go:135] skipping {c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3 running}: state = "running", want "paused"
	I0917 00:33:00.733884  619438 cri.go:129] container: {ID:dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c Status:running}
	I0917 00:33:00.733891  619438 cri.go:135] skipping {dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c running}: state = "running", want "paused"
	I0917 00:33:00.733906  619438 cri.go:129] container: {ID:feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15 Status:running}
	I0917 00:33:00.733915  619438 cri.go:135] skipping {feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15 running}: state = "running", want "paused"
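Every container is skipped because this restart path is looking for previously paused containers to resume, and all five are running. The same state check can be reproduced by hand (assumes jq is installed):

	sudo runc list -f json | jq -r '.[] | "\(.id) \(.status)"'
	# only containers in the state the unpause logic wants:
	sudo runc list -f json | jq -r '.[] | select(.status == "paused") | .id'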
	I0917 00:33:00.733967  619438 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:33:00.743818  619438 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:33:00.743842  619438 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:33:00.743896  619438 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:33:00.753049  619438 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:33:00.753478  619438 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-671025" does not appear in /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:33:00.753570  619438 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-517646/kubeconfig needs updating (will repair): [kubeconfig missing "ha-671025" cluster setting kubeconfig missing "ha-671025" context setting]
	I0917 00:33:00.753860  619438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:00.754368  619438 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:33:00.754887  619438 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:33:00.754902  619438 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:33:00.754906  619438 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:33:00.754911  619438 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:33:00.754914  619438 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:33:00.754984  619438 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:33:00.755286  619438 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:33:00.764691  619438 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0917 00:33:00.764721  619438 kubeadm.go:593] duration metric: took 20.872209ms to restartPrimaryControlPlane
	I0917 00:33:00.764732  619438 kubeadm.go:394] duration metric: took 94.344936ms to StartCluster
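The sub-21ms restartPrimaryControlPlane is the fast path: `diff -u` exits 0 when the rendered kubeadm config matches the one already on disk, and that zero exit is the entire "does not require reconfiguration" decision. Reproducible as:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "config unchanged - skipping kubeadm reconfiguration"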
	I0917 00:33:00.764754  619438 settings.go:142] acquiring lock: {Name:mk3b4e5824fb8718eece00dc70a9d05f0af2a028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:00.764829  619438 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:33:00.765434  619438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:00.765678  619438 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:33:00.765703  619438 start.go:241] waiting for startup goroutines ...
	I0917 00:33:00.765712  619438 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:33:00.765954  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:00.768475  619438 out.go:179] * Enabled addons: 
	I0917 00:33:00.769396  619438 addons.go:514] duration metric: took 3.672053ms for enable addons: enabled=[]
	I0917 00:33:00.769427  619438 start.go:246] waiting for cluster config update ...
	I0917 00:33:00.769435  619438 start.go:255] writing updated cluster config ...
	I0917 00:33:00.770640  619438 out.go:203] 
	I0917 00:33:00.771782  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:00.771882  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:00.773295  619438 out.go:179] * Starting "ha-671025-m02" control-plane node in "ha-671025" cluster
	I0917 00:33:00.774266  619438 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:33:00.775272  619438 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:33:00.776246  619438 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:33:00.776270  619438 cache.go:58] Caching tarball of preloaded images
	I0917 00:33:00.776303  619438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:33:00.776369  619438 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:33:00.776383  619438 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:33:00.776522  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:00.798181  619438 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:33:00.798201  619438 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:33:00.798221  619438 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:33:00.798259  619438 start.go:360] acquireMachinesLock for ha-671025-m02: {Name:mk1465985964f60af81adbf10dbe0a21c7eb20d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:33:00.798335  619438 start.go:364] duration metric: took 52.828µs to acquireMachinesLock for "ha-671025-m02"
	I0917 00:33:00.798366  619438 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:33:00.798404  619438 fix.go:54] fixHost starting: m02
	I0917 00:33:00.798630  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:33:00.816952  619438 fix.go:112] recreateIfNeeded on ha-671025-m02: state=Stopped err=<nil>
	W0917 00:33:00.816988  619438 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:33:00.818588  619438 out.go:252] * Restarting existing docker container for "ha-671025-m02" ...
	I0917 00:33:00.818663  619438 cli_runner.go:164] Run: docker start ha-671025-m02
	I0917 00:33:01.089289  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:33:01.112171  619438 kic.go:430] container "ha-671025-m02" state is running.
	I0917 00:33:01.112607  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:33:01.134692  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:01.134992  619438 machine.go:93] provisionDockerMachine start ...
	I0917 00:33:01.135064  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:01.156210  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:01.156564  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0917 00:33:01.156582  619438 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:33:01.157427  619438 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34164->127.0.0.1:33183: read: connection reset by peer
	I0917 00:33:04.296769  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m02
	
	I0917 00:33:04.296809  619438 ubuntu.go:182] provisioning hostname "ha-671025-m02"
	I0917 00:33:04.296905  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:04.315073  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:04.315310  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0917 00:33:04.315323  619438 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m02 && echo "ha-671025-m02" | sudo tee /etc/hostname
	I0917 00:33:04.466025  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m02
	
	I0917 00:33:04.466110  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:04.484268  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:04.484535  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0917 00:33:04.484554  619438 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:33:04.621439  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
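The shell script above makes the /etc/hosts update idempotent: if no entry already ends in the hostname, it rewrites the 127.0.1.1 line in place or appends one. A rough Go equivalent of the same logic (path and hostname copied from the log; the helper itself is hypothetical, and writing /etc/hosts requires root):

package main

import (
	"os"
	"regexp"
)

// ensureHostname mirrors the shell above: leave /etc/hosts alone if an entry
// already ends in the hostname, otherwise rewrite the 127.0.1.1 line or
// append a new one.
func ensureHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// grep -xq '.*\sha-671025-m02' /etc/hosts
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
		return nil
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.Match(data) {
		// sed -i 's/^127.0.1.1\s.*/127.0.1.1 <name>/g' /etc/hosts
		data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+name))
	} else {
		// echo '127.0.1.1 <name>' | tee -a /etc/hosts
		data = append(data, []byte("127.0.1.1 "+name+"\n")...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "ha-671025-m02"); err != nil {
		panic(err)
	}
}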
	I0917 00:33:04.621482  619438 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:33:04.621501  619438 ubuntu.go:190] setting up certificates
	I0917 00:33:04.621511  619438 provision.go:84] configureAuth start
	I0917 00:33:04.621573  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:33:04.640283  619438 provision.go:143] copyHostCerts
	I0917 00:33:04.640335  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:33:04.640368  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:33:04.640383  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:33:04.640480  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:33:04.640601  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:33:04.640634  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:33:04.640652  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:33:04.640698  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:33:04.640784  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:33:04.640809  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:33:04.640818  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:33:04.640852  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:33:04.640942  619438 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m02 san=[127.0.0.1 192.168.49.3 ha-671025-m02 localhost minikube]
	I0917 00:33:04.733693  619438 provision.go:177] copyRemoteCerts
	I0917 00:33:04.733759  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:33:04.733809  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:04.752499  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:33:04.850462  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:33:04.850518  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:33:04.876387  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:33:04.876625  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:33:04.904017  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:33:04.904091  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:33:04.932067  619438 provision.go:87] duration metric: took 310.54132ms to configureAuth
	I0917 00:33:04.932114  619438 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:33:04.932333  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:04.932519  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:04.950911  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:04.951173  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0917 00:33:04.951192  619438 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:33:13.583717  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:33:13.583742  619438 machine.go:96] duration metric: took 12.448736712s to provisionDockerMachine
	I0917 00:33:13.583754  619438 start.go:293] postStartSetup for "ha-671025-m02" (driver="docker")
	I0917 00:33:13.583768  619438 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:33:13.583844  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:33:13.583889  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:13.602374  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:33:13.704271  619438 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:33:13.709862  619438 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:33:13.709910  619438 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:33:13.709921  619438 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:33:13.709930  619438 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:33:13.709945  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:33:13.710027  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:33:13.710128  619438 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:33:13.710138  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:33:13.710258  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:33:13.726542  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:33:13.762021  619438 start.go:296] duration metric: took 178.248287ms for postStartSetup
	I0917 00:33:13.762146  619438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:33:13.762202  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:13.785807  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:33:13.885926  619438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:33:13.890781  619438 fix.go:56] duration metric: took 13.092394555s for fixHost
	I0917 00:33:13.890814  619438 start.go:83] releasing machines lock for "ha-671025-m02", held for 13.092464098s
	I0917 00:33:13.890888  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:33:13.912194  619438 out.go:179] * Found network options:
	I0917 00:33:13.913617  619438 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:33:13.914820  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:33:13.914864  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:33:13.914934  619438 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:33:13.914975  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:13.915050  619438 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:33:13.915121  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:13.935804  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:33:13.936030  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:33:14.188511  619438 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:33:14.195453  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:33:14.211117  619438 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:33:14.211201  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:33:14.227642  619438 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:33:14.227708  619438 start.go:495] detecting cgroup driver to use...
	I0917 00:33:14.227849  619438 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:33:14.227922  619438 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:33:14.251293  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:33:14.271238  619438 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:33:14.271313  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:33:14.288904  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:33:14.307961  619438 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:33:14.437900  619438 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:33:14.545190  619438 docker.go:234] disabling docker service ...
	I0917 00:33:14.545281  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:33:14.560872  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:33:14.573584  619438 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:33:14.680197  619438 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:33:14.811100  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:33:14.825885  619438 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:33:14.847059  619438 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:33:14.847127  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.859808  619438 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:33:14.859899  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.871797  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.883328  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.896664  619438 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:33:14.907675  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.918906  619438 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.929358  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
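The sed passes above patch /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to systemd, and re-insert conmon_cgroup after it. A sketch of the same rewrites as in-memory regexp edits (the sample config text is invented for illustration):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Invented sample of /etc/crio/crio.conf.d/02-crio.conf for illustration.
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
`
	// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// sed 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	// sed '/conmon_cgroup = .*/d', then '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}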
	I0917 00:33:14.941273  619438 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:33:14.953043  619438 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:33:14.967648  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:15.083218  619438 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 00:33:21.777437  619438 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.694178293s)
	I0917 00:33:21.777485  619438 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:33:21.777539  619438 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:33:21.781615  619438 start.go:563] Will wait 60s for crictl version
	I0917 00:33:21.781681  619438 ssh_runner.go:195] Run: which crictl
	I0917 00:33:21.785837  619438 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:33:21.828119  619438 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:33:21.828217  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:33:21.874252  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:33:21.916319  619438 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:33:21.917788  619438 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:33:21.918929  619438 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:33:21.938354  619438 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:33:21.942655  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:33:21.956120  619438 mustload.go:65] Loading cluster: ha-671025
	I0917 00:33:21.956460  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:21.956800  619438 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:33:21.976493  619438 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:33:21.976752  619438 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.3
	I0917 00:33:21.976765  619438 certs.go:194] generating shared ca certs ...
	I0917 00:33:21.976779  619438 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:21.976919  619438 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:33:21.976970  619438 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:33:21.976980  619438 certs.go:256] generating profile certs ...
	I0917 00:33:21.977105  619438 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:33:21.977160  619438 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.289f7349
	I0917 00:33:21.977201  619438 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:33:21.977214  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:33:21.977226  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:33:21.977238  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:33:21.977248  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:33:21.977263  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:33:21.977277  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:33:21.977292  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:33:21.977304  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:33:21.977348  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:33:21.977374  619438 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:33:21.977384  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:33:21.977437  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:33:21.977468  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:33:21.977488  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:33:21.977537  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:33:21.977566  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:33:21.977579  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:21.977591  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:33:21.977641  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:33:21.996033  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:33:22.086756  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:33:22.091430  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:33:22.105578  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:33:22.109474  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 00:33:22.123413  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:33:22.127015  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:33:22.140675  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:33:22.145374  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 00:33:22.160202  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:33:22.164648  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:33:22.179040  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:33:22.182820  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:33:22.197252  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:33:22.226621  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:33:22.255420  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:33:22.284497  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:33:22.313100  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:33:22.339570  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:33:22.368270  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:33:22.395836  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:33:22.424911  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:33:22.451321  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:33:22.479698  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:33:22.509017  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:33:22.530192  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 00:33:22.550277  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:33:22.570982  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 00:33:22.591763  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:33:22.615610  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:33:22.637548  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:33:22.660728  619438 ssh_runner.go:195] Run: openssl version
	I0917 00:33:22.668525  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:33:22.679921  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:33:22.684865  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:33:22.684929  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:33:22.692513  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:33:22.703651  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:33:22.716758  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:22.721573  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:22.721639  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:22.729408  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:33:22.740799  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:33:22.754481  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:33:22.759515  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:33:22.759591  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:33:22.769873  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:33:22.780940  619438 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:33:22.785123  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:33:22.792739  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:33:22.800305  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:33:22.808094  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:33:22.815985  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:33:22.823772  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
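Each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours, which is what would trigger regeneration. A hedged Go equivalent (the cert path is one of those checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same test `openssl x509 -checkend <seconds>` performs.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}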
	I0917 00:33:22.830968  619438 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 crio true true} ...
	I0917 00:33:22.831108  619438 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
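The kubelet drop-in above is rendered per node, with --hostname-override and --node-ip substituted for each machine. A trimmed sketch of rendering it with text/template (template text abridged from the log; the struct and values are assumptions):

package main

import (
	"os"
	"text/template"
)

// Abridged form of the [Unit]/[Service] drop-in shown in the log; some
// kubelet flags are omitted for brevity.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values for the m02 node, taken from the log above.
	if err := t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.34.0", "ha-671025-m02", "192.168.49.3"}); err != nil {
		panic(err)
	}
}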
	I0917 00:33:22.831135  619438 kube-vip.go:115] generating kube-vip config ...
	I0917 00:33:22.831174  619438 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:33:22.845445  619438 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:33:22.845549  619438 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
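kube-vip runs as a static pod (the manifest above is written to /etc/kubernetes/manifests, so kubelet starts it directly) and elects the VIP holder for 192.168.49.254 through the plndr-cp-lock Lease in kube-system. A speculative sketch of inspecting that Lease with client-go (kubeconfig path taken from the log; whether the Lease exists depends on kube-vip actually running):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21550-517646/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Lease name and namespace come from vip_leasename / cp_namespace above.
	lease, err := cs.CoordinationV1().Leases("kube-system").Get(
		context.Background(), "plndr-cp-lock", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if lease.Spec.HolderIdentity != nil {
		fmt.Println("VIP holder:", *lease.Spec.HolderIdentity)
	}
}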
	I0917 00:33:22.845617  619438 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:33:22.856831  619438 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:33:22.856928  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:33:22.867889  619438 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 00:33:22.888469  619438 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:33:22.908498  619438 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:33:22.929249  619438 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:33:22.933575  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:33:22.945785  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:23.049186  619438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:33:23.063035  619438 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:33:23.063337  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:23.065109  619438 out.go:179] * Verifying Kubernetes components...
	I0917 00:33:23.066721  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:23.162455  619438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:33:23.176145  619438 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:33:23.176215  619438 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:33:23.176479  619438 node_ready.go:35] waiting up to 6m0s for node "ha-671025-m02" to be "Ready" ...
	I0917 00:33:23.185303  619438 node_ready.go:49] node "ha-671025-m02" is "Ready"
	I0917 00:33:23.185333  619438 node_ready.go:38] duration metric: took 8.819618ms for node "ha-671025-m02" to be "Ready" ...
	I0917 00:33:23.185350  619438 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:33:23.185420  619438 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:33:23.197637  619438 api_server.go:72] duration metric: took 134.535244ms to wait for apiserver process to appear ...
	I0917 00:33:23.197672  619438 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:33:23.197693  619438 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:33:23.202879  619438 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:33:23.204114  619438 api_server.go:141] control plane version: v1.34.0
	I0917 00:33:23.204224  619438 api_server.go:131] duration metric: took 6.534103ms to wait for apiserver health ...
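The healthz wait above is a plain HTTPS GET against /healthz that succeeds on a 200 "ok" response. A minimal sketch of the same probe (host and CA path from the log; depending on the cluster's RBAC, anonymous access to /healthz may instead require client certificates):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Trust the cluster CA so the apiserver's serving cert verifies.
	ca, err := os.ReadFile("/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(ca)
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{RootCAs: pool},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // healthy apiserver: 200 ok
}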
	I0917 00:33:23.204244  619438 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:33:23.211681  619438 system_pods.go:59] 24 kube-system pods found
	I0917 00:33:23.211742  619438 system_pods.go:61] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:23.211758  619438 system_pods.go:61] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:23.211769  619438 system_pods.go:61] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:33:23.211777  619438 system_pods.go:61] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:33:23.211783  619438 system_pods.go:61] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running
	I0917 00:33:23.211792  619438 system_pods.go:61] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 00:33:23.211798  619438 system_pods.go:61] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:33:23.211807  619438 system_pods.go:61] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:33:23.211816  619438 system_pods.go:61] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:33:23.211822  619438 system_pods.go:61] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:33:23.211829  619438 system_pods.go:61] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running
	I0917 00:33:23.211836  619438 system_pods.go:61] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:33:23.211844  619438 system_pods.go:61] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:33:23.211850  619438 system_pods.go:61] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running
	I0917 00:33:23.211859  619438 system_pods.go:61] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:33:23.211867  619438 system_pods.go:61] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:33:23.211875  619438 system_pods.go:61] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:33:23.211881  619438 system_pods.go:61] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:33:23.211888  619438 system_pods.go:61] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:33:23.211896  619438 system_pods.go:61] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running
	I0917 00:33:23.211902  619438 system_pods.go:61] "kube-vip-ha-671025" [bcb7c84b-932c-463e-a710-1d665741e70a] Running
	I0917 00:33:23.211907  619438 system_pods.go:61] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:33:23.211913  619438 system_pods.go:61] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:33:23.211919  619438 system_pods.go:61] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:33:23.211928  619438 system_pods.go:74] duration metric: took 7.670911ms to wait for pod list to return data ...
	I0917 00:33:23.211942  619438 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:33:23.215282  619438 default_sa.go:45] found service account: "default"
	I0917 00:33:23.215305  619438 default_sa.go:55] duration metric: took 3.354164ms for default service account to be created ...
	I0917 00:33:23.215314  619438 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:33:23.220686  619438 system_pods.go:86] 24 kube-system pods found
	I0917 00:33:23.220721  619438 system_pods.go:89] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:23.220730  619438 system_pods.go:89] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:23.220737  619438 system_pods.go:89] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:33:23.220741  619438 system_pods.go:89] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:33:23.220745  619438 system_pods.go:89] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running
	I0917 00:33:23.220750  619438 system_pods.go:89] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 00:33:23.220753  619438 system_pods.go:89] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:33:23.220759  619438 system_pods.go:89] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:33:23.220763  619438 system_pods.go:89] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:33:23.220768  619438 system_pods.go:89] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:33:23.220771  619438 system_pods.go:89] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running
	I0917 00:33:23.220774  619438 system_pods.go:89] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:33:23.220778  619438 system_pods.go:89] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:33:23.220782  619438 system_pods.go:89] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running
	I0917 00:33:23.220786  619438 system_pods.go:89] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:33:23.220790  619438 system_pods.go:89] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:33:23.220793  619438 system_pods.go:89] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:33:23.220796  619438 system_pods.go:89] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:33:23.220800  619438 system_pods.go:89] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:33:23.220803  619438 system_pods.go:89] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running
	I0917 00:33:23.220806  619438 system_pods.go:89] "kube-vip-ha-671025" [bcb7c84b-932c-463e-a710-1d665741e70a] Running
	I0917 00:33:23.220808  619438 system_pods.go:89] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:33:23.220812  619438 system_pods.go:89] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:33:23.220816  619438 system_pods.go:89] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:33:23.220822  619438 system_pods.go:126] duration metric: took 5.503704ms to wait for k8s-apps to be running ...
	I0917 00:33:23.220830  619438 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:33:23.220878  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:33:23.233344  619438 system_svc.go:56] duration metric: took 12.501522ms WaitForService to wait for kubelet
	I0917 00:33:23.233378  619438 kubeadm.go:578] duration metric: took 170.282ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:33:23.233426  619438 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:33:23.237203  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:23.237235  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:23.237249  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:23.237253  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:23.237258  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:23.237263  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:23.237268  619438 node_conditions.go:105] duration metric: took 3.836923ms to run NodePressure ...
	I0917 00:33:23.237281  619438 start.go:241] waiting for startup goroutines ...
	I0917 00:33:23.237310  619438 start.go:255] writing updated cluster config ...
	I0917 00:33:23.239362  619438 out.go:203] 
	I0917 00:33:23.240662  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:23.240787  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:23.242255  619438 out.go:179] * Starting "ha-671025-m03" control-plane node in "ha-671025" cluster
	I0917 00:33:23.243650  619438 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:33:23.244785  619438 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:33:23.245985  619438 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:33:23.246015  619438 cache.go:58] Caching tarball of preloaded images
	I0917 00:33:23.246076  619438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:33:23.246103  619438 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:33:23.246111  619438 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:33:23.246237  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:23.267677  619438 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:33:23.267698  619438 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:33:23.267719  619438 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:33:23.267746  619438 start.go:360] acquireMachinesLock for ha-671025-m03: {Name:mk60ae20c28e89b2af34eaf4825fcf2e756b9f82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:33:23.267801  619438 start.go:364] duration metric: took 38.266µs to acquireMachinesLock for "ha-671025-m03"
	I0917 00:33:23.267818  619438 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:33:23.267825  619438 fix.go:54] fixHost starting: m03
	I0917 00:33:23.268049  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:33:23.286470  619438 fix.go:112] recreateIfNeeded on ha-671025-m03: state=Stopped err=<nil>
	W0917 00:33:23.286501  619438 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:33:23.288337  619438 out.go:252] * Restarting existing docker container for "ha-671025-m03" ...
	I0917 00:33:23.288444  619438 cli_runner.go:164] Run: docker start ha-671025-m03
	I0917 00:33:23.539232  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:33:23.559852  619438 kic.go:430] container "ha-671025-m03" state is running.
	I0917 00:33:23.560281  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:33:23.582181  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:23.582448  619438 machine.go:93] provisionDockerMachine start ...
	I0917 00:33:23.582512  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:23.603240  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:23.603508  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0917 00:33:23.603524  619438 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:33:23.604268  619438 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54628->127.0.0.1:33188: read: connection reset by peer
	I0917 00:33:26.756053  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m03
	
	I0917 00:33:26.756095  619438 ubuntu.go:182] provisioning hostname "ha-671025-m03"
	I0917 00:33:26.756163  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:26.775553  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:26.775816  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0917 00:33:26.775832  619438 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m03 && echo "ha-671025-m03" | sudo tee /etc/hostname
	I0917 00:33:26.929724  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m03
	
	I0917 00:33:26.929811  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:26.948952  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:26.949181  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0917 00:33:26.949199  619438 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:33:27.097686  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:33:27.097724  619438 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:33:27.097808  619438 ubuntu.go:190] setting up certificates
	I0917 00:33:27.097838  619438 provision.go:84] configureAuth start
	I0917 00:33:27.097905  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:33:27.124607  619438 provision.go:143] copyHostCerts
	I0917 00:33:27.124661  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:33:27.124704  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:33:27.124712  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:33:27.124796  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:33:27.124902  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:33:27.124927  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:33:27.124938  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:33:27.124978  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:33:27.125071  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:33:27.125093  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:33:27.125097  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:33:27.125123  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:33:27.125202  619438 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m03 san=[127.0.0.1 192.168.49.4 ha-671025-m03 localhost minikube]
	I0917 00:33:27.491028  619438 provision.go:177] copyRemoteCerts
	I0917 00:33:27.491103  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:33:27.491153  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:27.510894  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:33:27.621913  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:33:27.621991  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:33:27.659332  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:33:27.659436  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:33:27.694265  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:33:27.694331  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:33:27.729012  619438 provision.go:87] duration metric: took 631.150589ms to configureAuth
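
configureAuth regenerates the machine's server certificate against the minikube CA with the SAN list logged above (127.0.0.1, 192.168.49.4, ha-671025-m03, localhost, minikube) and org jenkins.ha-671025-m03. A self-contained sketch of issuing such a certificate with Go's crypto/x509; unlike minikube, which signs with ca.pem/ca-key.pem, this sketch self-signs to stay short:

    // certsketch.go - issue a server cert carrying the SANs from the
    // provision.go log line above. Self-signed here; minikube CA-signs.
    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-671025-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs copied from the san=[...] list in the log.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
    		DNSNames:    []string{"ha-671025-m03", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }

The copyRemoteCerts step that follows then pushes ca.pem, server.pem, and server-key.pem to /etc/docker on the machine over SSH.
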
	I0917 00:33:27.729044  619438 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:33:27.729332  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:27.729498  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:27.752375  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:27.752667  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0917 00:33:27.752694  619438 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:33:28.163571  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:33:28.163606  619438 machine.go:96] duration metric: took 4.581141061s to provisionDockerMachine
	I0917 00:33:28.163625  619438 start.go:293] postStartSetup for "ha-671025-m03" (driver="docker")
	I0917 00:33:28.163636  619438 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:33:28.163694  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:33:28.163736  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:28.183221  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:33:28.282370  619438 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:33:28.286033  619438 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:33:28.286069  619438 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:33:28.286080  619438 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:33:28.286089  619438 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:33:28.286103  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:33:28.286167  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:33:28.286260  619438 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:33:28.286273  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:33:28.286385  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:33:28.296210  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:33:28.323607  619438 start.go:296] duration metric: took 159.96344ms for postStartSetup
	I0917 00:33:28.323744  619438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:33:28.323801  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:28.341948  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:33:28.437100  619438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:33:28.442217  619438 fix.go:56] duration metric: took 5.174381535s for fixHost
	I0917 00:33:28.442251  619438 start.go:83] releasing machines lock for "ha-671025-m03", held for 5.17444003s
	I0917 00:33:28.442339  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:33:28.462490  619438 out.go:179] * Found network options:
	I0917 00:33:28.463995  619438 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0917 00:33:28.465339  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:33:28.465379  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:33:28.465437  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:33:28.465456  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:33:28.465540  619438 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:33:28.465604  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:28.465608  619438 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:33:28.465666  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:28.484618  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:33:28.484954  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:33:28.729938  619438 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:33:28.735367  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:33:28.746253  619438 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:33:28.746345  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:33:28.757317  619438 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:33:28.757344  619438 start.go:495] detecting cgroup driver to use...
	I0917 00:33:28.757382  619438 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:33:28.757457  619438 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:33:28.772308  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:33:28.784900  619438 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:33:28.784967  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:33:28.800003  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:33:28.812730  619438 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:33:28.927855  619438 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:33:29.059441  619438 docker.go:234] disabling docker service ...
	I0917 00:33:29.059519  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:33:29.078537  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:33:29.093278  619438 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:33:29.210953  619438 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:33:29.324946  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:33:29.337107  619438 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:33:29.355136  619438 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:33:29.355186  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.366142  619438 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:33:29.366211  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.378355  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.389105  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.399699  619438 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:33:29.409712  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.420697  619438 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.430508  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.440921  619438 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:33:29.450466  619438 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:33:29.459577  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:29.574875  619438 ssh_runner.go:195] Run: sudo systemctl restart crio
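
The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place before the crio restart. Reconstructed from those sed expressions (the section headers follow CRI-O's documented layout and are an assumption, not captured from the host), the touched keys end up roughly as:

    # /etc/crio/crio.conf.d/02-crio.conf - relevant keys after the edits above
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

conmon_cgroup must be "pod" when cgroup_manager is systemd, which is why the old line is deleted and re-inserted right after the cgroup_manager line rather than patched.
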
	I0917 00:33:29.816990  619438 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:33:29.817095  619438 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:33:29.821723  619438 start.go:563] Will wait 60s for crictl version
	I0917 00:33:29.821780  619438 ssh_runner.go:195] Run: which crictl
	I0917 00:33:29.825613  619438 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:33:29.861449  619438 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:33:29.861530  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:33:29.917974  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:33:29.959407  619438 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:33:29.960768  619438 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:33:29.962037  619438 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0917 00:33:29.963347  619438 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:33:29.990529  619438 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:33:29.995062  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:33:30.007594  619438 mustload.go:65] Loading cluster: ha-671025
	I0917 00:33:30.007810  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:30.008007  619438 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:33:30.028172  619438 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:33:30.028488  619438 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.4
	I0917 00:33:30.028502  619438 certs.go:194] generating shared ca certs ...
	I0917 00:33:30.028518  619438 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:30.028667  619438 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:33:30.028724  619438 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:33:30.028738  619438 certs.go:256] generating profile certs ...
	I0917 00:33:30.028835  619438 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:33:30.028918  619438 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.bb6f0fe7
	I0917 00:33:30.028969  619438 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:33:30.028985  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:33:30.029006  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:33:30.029022  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:33:30.029039  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:33:30.029053  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:33:30.029066  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:33:30.029085  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:33:30.029109  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:33:30.029181  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:33:30.029228  619438 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:33:30.029241  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:33:30.029285  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:33:30.029320  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:33:30.029350  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:33:30.029418  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:33:30.029458  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:30.029480  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:33:30.029497  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:33:30.029570  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:33:30.048859  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:33:30.137756  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:33:30.142385  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:33:30.157058  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:33:30.161473  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 00:33:30.176759  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:33:30.180509  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:33:30.193674  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:33:30.197197  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 00:33:30.210232  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:33:30.214138  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:33:30.227500  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:33:30.231351  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:33:30.244274  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:33:30.271911  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:33:30.299112  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:33:30.326476  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:33:30.352993  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:33:30.380621  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:33:30.406324  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:33:30.432139  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:33:30.458308  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:33:30.483817  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:33:30.509827  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:33:30.537659  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:33:30.557593  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 00:33:30.577579  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:33:30.597023  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 00:33:30.617353  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:33:30.636531  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:33:30.656268  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:33:30.676462  619438 ssh_runner.go:195] Run: openssl version
	I0917 00:33:30.682486  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:33:30.693023  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:30.696932  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:30.696986  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:30.704184  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:33:30.714256  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:33:30.725254  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:33:30.728941  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:33:30.729013  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:33:30.736673  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:33:30.746358  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:33:30.757231  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:33:30.761269  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:33:30.761351  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:33:30.768689  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:33:30.779054  619438 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:33:30.783069  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:33:30.790436  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:33:30.797491  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:33:30.804684  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:33:30.811602  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:33:30.818603  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
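
Each of these openssl invocations is the -checkend 86400 test: exit non-zero if the certificate expires within the next 24 hours, which tells minikube whether the control-plane certs need regeneration. The Go equivalent is a NotAfter comparison; the path below is illustrative:

    // checkend.go - the `openssl x509 -noout -checkend 86400` test in Go:
    // report whether a PEM certificate expires within the next 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	// Path is illustrative; the log checks several certs under /var/lib/minikube/certs.
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 86400*time.Second)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
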
	I0917 00:33:30.825614  619438 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 crio true true} ...
	I0917 00:33:30.825731  619438 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:33:30.825755  619438 kube-vip.go:115] generating kube-vip config ...
	I0917 00:33:30.825793  619438 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:33:30.839517  619438 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:33:30.839587  619438 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
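
Note that the manifest above carries no load-balancing settings: kube-vip.go first probed for ip_vs kernel modules, and because `lsmod | grep ip_vs` exited 1 it fell back to an ARP-advertised VIP (vip_arp=true on eth0, address 192.168.49.254) with leader election via the plndr-cp-lock lease. A sketch of that gate:

    // ipvsgate.go - the ip_vs probe that gates control-plane
    // load-balancing: run `lsmod | grep ip_vs` and fall back when it
    // exits non-zero, as it did in this log.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	enableLB := true
    	if err := exec.Command("sh", "-c", "lsmod | grep ip_vs").Run(); err != nil {
    		// grep exiting 1 means no ip_vs modules are loaded.
    		enableLB = false
    	}
    	fmt.Println("kube-vip load-balancing enabled:", enableLB)
    }
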
	I0917 00:33:30.839637  619438 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:33:30.849197  619438 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:33:30.849283  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:33:30.859805  619438 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 00:33:30.879168  619438 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:33:30.898461  619438 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:33:30.918131  619438 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:33:30.922054  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:33:30.934606  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:31.047135  619438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:33:31.060828  619438 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:33:31.061141  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:31.063169  619438 out.go:179] * Verifying Kubernetes components...
	I0917 00:33:31.064429  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:31.179306  619438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:33:31.194472  619438 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:33:31.194609  619438 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:33:31.194890  619438 node_ready.go:35] waiting up to 6m0s for node "ha-671025-m03" to be "Ready" ...
	I0917 00:33:31.198458  619438 node_ready.go:49] node "ha-671025-m03" is "Ready"
	I0917 00:33:31.198488  619438 node_ready.go:38] duration metric: took 3.579476ms for node "ha-671025-m03" to be "Ready" ...
	I0917 00:33:31.198503  619438 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:33:31.198550  619438 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:33:31.212138  619438 api_server.go:72] duration metric: took 151.254038ms to wait for apiserver process to appear ...
	I0917 00:33:31.212172  619438 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:33:31.212199  619438 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:33:31.217814  619438 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:33:31.218774  619438 api_server.go:141] control plane version: v1.34.0
	I0917 00:33:31.218795  619438 api_server.go:131] duration metric: took 6.616763ms to wait for apiserver health ...
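
The healthz wait above is an authenticated GET against the apiserver using the client.crt/client.key and cluster CA from the client config, succeeding once the body reads "ok". A sketch, with the cert paths shortened to $HOME/.minikube (an assumption; the log's Jenkins checkout paths differ):

    // healthz.go - probe the apiserver healthz endpoint over HTTPS with
    // the profile's client cert and the cluster CA, expecting body "ok".
    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    func main() {
    	cert, err := tls.LoadX509KeyPair(
    		os.ExpandEnv("$HOME/.minikube/profiles/ha-671025/client.crt"),
    		os.ExpandEnv("$HOME/.minikube/profiles/ha-671025/client.key"),
    	)
    	if err != nil {
    		panic(err)
    	}
    	caPEM, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/ca.crt"))
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)

    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
    	}}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }

Note the override at kubeadm.go:483 just above: the probe goes to the node's direct address 192.168.49.2:8443 rather than the stale VIP 192.168.49.254:8443.
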
	I0917 00:33:31.218803  619438 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:33:31.225098  619438 system_pods.go:59] 24 kube-system pods found
	I0917 00:33:31.225134  619438 system_pods.go:61] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:31.225141  619438 system_pods.go:61] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:31.225149  619438 system_pods.go:61] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:33:31.225155  619438 system_pods.go:61] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:33:31.225163  619438 system_pods.go:61] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:33:31.225168  619438 system_pods.go:61] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:33:31.225177  619438 system_pods.go:61] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:33:31.225185  619438 system_pods.go:61] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:33:31.225190  619438 system_pods.go:61] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:33:31.225199  619438 system_pods.go:61] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:33:31.225205  619438 system_pods.go:61] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:33:31.225209  619438 system_pods.go:61] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:33:31.225213  619438 system_pods.go:61] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:33:31.225219  619438 system_pods.go:61] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:33:31.225225  619438 system_pods.go:61] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:33:31.225228  619438 system_pods.go:61] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:33:31.225231  619438 system_pods.go:61] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:33:31.225235  619438 system_pods.go:61] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:33:31.225237  619438 system_pods.go:61] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:33:31.225242  619438 system_pods.go:61] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:33:31.225247  619438 system_pods.go:61] "kube-vip-ha-671025" [bcb7c84b-932c-463e-a710-1d665741e70a] Running
	I0917 00:33:31.225250  619438 system_pods.go:61] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:33:31.225253  619438 system_pods.go:61] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:33:31.225255  619438 system_pods.go:61] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:33:31.225261  619438 system_pods.go:74] duration metric: took 6.452715ms to wait for pod list to return data ...
	I0917 00:33:31.225280  619438 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:33:31.228376  619438 default_sa.go:45] found service account: "default"
	I0917 00:33:31.228411  619438 default_sa.go:55] duration metric: took 3.119992ms for default service account to be created ...
	I0917 00:33:31.228422  619438 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:33:31.233445  619438 system_pods.go:86] 24 kube-system pods found
	I0917 00:33:31.233478  619438 system_pods.go:89] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:31.233487  619438 system_pods.go:89] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:31.233491  619438 system_pods.go:89] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:33:31.233495  619438 system_pods.go:89] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:33:31.233501  619438 system_pods.go:89] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:33:31.233504  619438 system_pods.go:89] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:33:31.233508  619438 system_pods.go:89] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:33:31.233511  619438 system_pods.go:89] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:33:31.233517  619438 system_pods.go:89] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:33:31.233523  619438 system_pods.go:89] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:33:31.233529  619438 system_pods.go:89] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:33:31.233535  619438 system_pods.go:89] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:33:31.233540  619438 system_pods.go:89] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:33:31.233548  619438 system_pods.go:89] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:33:31.233555  619438 system_pods.go:89] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:33:31.233559  619438 system_pods.go:89] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:33:31.233566  619438 system_pods.go:89] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:33:31.233570  619438 system_pods.go:89] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:33:31.233576  619438 system_pods.go:89] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:33:31.233581  619438 system_pods.go:89] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:33:31.233587  619438 system_pods.go:89] "kube-vip-ha-671025" [bcb7c84b-932c-463e-a710-1d665741e70a] Running
	I0917 00:33:31.233590  619438 system_pods.go:89] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:33:31.233596  619438 system_pods.go:89] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:33:31.233599  619438 system_pods.go:89] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:33:31.233605  619438 system_pods.go:126] duration metric: took 5.178303ms to wait for k8s-apps to be running ...
	I0917 00:33:31.233615  619438 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:33:31.233661  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:33:31.246667  619438 system_svc.go:56] duration metric: took 13.0386ms WaitForService to wait for kubelet
	I0917 00:33:31.246701  619438 kubeadm.go:578] duration metric: took 185.824043ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:33:31.246730  619438 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:33:31.250636  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:31.250665  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:31.250679  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:31.250684  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:31.250690  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:31.250694  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:31.250700  619438 node_conditions.go:105] duration metric: took 3.96358ms to run NodePressure ...
	I0917 00:33:31.250716  619438 start.go:241] waiting for startup goroutines ...
	I0917 00:33:31.250743  619438 start.go:255] writing updated cluster config ...
	I0917 00:33:31.253191  619438 out.go:203] 
	I0917 00:33:31.255560  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:31.255716  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:31.257849  619438 out.go:179] * Starting "ha-671025-m04" worker node in "ha-671025" cluster
	I0917 00:33:31.259401  619438 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:33:31.260716  619438 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:33:31.262230  619438 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:33:31.262264  619438 cache.go:58] Caching tarball of preloaded images
	I0917 00:33:31.262330  619438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:33:31.262386  619438 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:33:31.262432  619438 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:33:31.262581  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:31.285684  619438 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:33:31.285706  619438 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:33:31.285722  619438 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:33:31.285751  619438 start.go:360] acquireMachinesLock for ha-671025-m04: {Name:mka8d143727db583191b041d9fdffdc34290d3fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:33:31.285824  619438 start.go:364] duration metric: took 55.532µs to acquireMachinesLock for "ha-671025-m04"
	I0917 00:33:31.285843  619438 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:33:31.285851  619438 fix.go:54] fixHost starting: m04
	I0917 00:33:31.286063  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:33:31.305028  619438 fix.go:112] recreateIfNeeded on ha-671025-m04: state=Stopped err=<nil>
	W0917 00:33:31.305061  619438 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:33:31.307579  619438 out.go:252] * Restarting existing docker container for "ha-671025-m04" ...
	I0917 00:33:31.307671  619438 cli_runner.go:164] Run: docker start ha-671025-m04
	I0917 00:33:31.575879  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:33:31.595646  619438 kic.go:430] container "ha-671025-m04" state is running.
	I0917 00:33:31.596093  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:33:31.616747  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:31.617092  619438 machine.go:93] provisionDockerMachine start ...
	I0917 00:33:31.617170  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:33:31.636573  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:31.636791  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I0917 00:33:31.636802  619438 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:33:31.637630  619438 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36226->127.0.0.1:33193: read: connection reset by peer
	I0917 00:33:34.638709  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	[... 58 near-identical "connection refused" dial errors, retried every ~3s from 00:33:37 through 00:36:28, omitted ...]
	I0917 00:36:31.722479  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:36:31.722518  619438 ubuntu.go:182] provisioning hostname "ha-671025-m04"
	I0917 00:36:31.722607  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:31.744520  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:31.744620  619438 machine.go:96] duration metric: took 3m0.127509973s to provisionDockerMachine
	I0917 00:36:31.744723  619438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:36:31.744770  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:31.764601  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:31.764736  619438 retry.go:31] will retry after 288.945807ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:32.054420  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:32.074595  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:32.074728  619438 retry.go:31] will retry after 272.369407ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:32.348309  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:32.368462  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:32.368608  619438 retry.go:31] will retry after 744.516266ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:33.113868  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:33.133032  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:33.133163  619438 retry.go:31] will retry after 492.951246ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:33.626619  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:33.647357  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	W0917 00:36:33.647505  619438 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:36:33.647528  619438 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:33.647587  619438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:36:33.647631  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:33.666215  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:33.666338  619438 retry.go:31] will retry after 272.675779ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:33.939657  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:33.958470  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:33.958588  619438 retry.go:31] will retry after 525.446207ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:34.484331  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:34.504346  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:34.504492  619438 retry.go:31] will retry after 588.594219ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:35.093370  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:35.116893  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	W0917 00:36:35.117042  619438 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:36:35.117086  619438 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:35.117113  619438 fix.go:56] duration metric: took 3m3.831261756s for fixHost
	I0917 00:36:35.117126  619438 start.go:83] releasing machines lock for "ha-671025-m04", held for 3m3.831291336s
	W0917 00:36:35.117142  619438 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:36:35.117240  619438 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:35.117254  619438 start.go:729] Will try again in 5 seconds ...
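
The retry.go lines above show the provisioner re-probing the node with short, slightly jittered delays ("will retry after 288.945807ms", and so on). As a rough sketch of that backoff pattern only — the helper name and signature below are hypothetical, not minikube's actual retry package — it amounts to:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter re-runs fn until it succeeds or maxWait elapses,
// sleeping a jittered delay between attempts (hypothetical helper,
// mirroring the "will retry after ..." log lines above).
func retryWithJitter(fn func() error, base, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out; last error: %w", err)
		}
		// Jitter the delay so concurrent retries don't synchronize.
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
}

func main() {
	attempts := 0
	err := retryWithJitter(func() error {
		attempts++
		if attempts < 3 {
			return errors.New("connection refused")
		}
		return nil
	}, 300*time.Millisecond, 5*time.Second)
	fmt.Println("result:", err)
}
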
	I0917 00:36:40.118524  619438 start.go:360] acquireMachinesLock for ha-671025-m04: {Name:mka8d143727db583191b041d9fdffdc34290d3fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:36:40.118656  619438 start.go:364] duration metric: took 88.188µs to acquireMachinesLock for "ha-671025-m04"
	I0917 00:36:40.118689  619438 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:36:40.118698  619438 fix.go:54] fixHost starting: m04
	I0917 00:36:40.119106  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:36:40.139538  619438 fix.go:112] recreateIfNeeded on ha-671025-m04: state=Stopped err=<nil>
	W0917 00:36:40.139579  619438 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:36:40.141549  619438 out.go:252] * Restarting existing docker container for "ha-671025-m04" ...
	I0917 00:36:40.141624  619438 cli_runner.go:164] Run: docker start ha-671025-m04
	I0917 00:36:40.412862  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:36:40.433322  619438 kic.go:430] container "ha-671025-m04" state is running.
	I0917 00:36:40.433799  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:36:40.453513  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:36:40.453934  619438 machine.go:93] provisionDockerMachine start ...
	I0917 00:36:40.454059  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:36:40.473978  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:36:40.474315  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I0917 00:36:40.474331  619438 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:36:40.475099  619438 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33606->127.0.0.1:33198: read: connection reset by peer
	I0917 00:36:43.475724  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	[... 58 near-identical "connection refused" dial errors, retried every ~3s from 00:36:46 through 00:39:37, omitted ...]
	I0917 00:39:40.555606  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:39:40.555645  619438 ubuntu.go:182] provisioning hostname "ha-671025-m04"
	I0917 00:39:40.555731  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:40.576194  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:40.576295  619438 machine.go:96] duration metric: took 3m0.122321612s to provisionDockerMachine
	I0917 00:39:40.576379  619438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:39:40.576440  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:40.595844  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:40.595977  619438 retry.go:31] will retry after 334.138339ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:40.931319  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:40.951370  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:40.951504  619438 retry.go:31] will retry after 347.147392ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:41.299070  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:41.319717  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:41.319850  619438 retry.go:31] will retry after 612.672267ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:41.933618  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:41.954663  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	W0917 00:39:41.954778  619438 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:39:41.954797  619438 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:41.954845  619438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:39:41.954878  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:41.975511  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:41.975621  619438 retry.go:31] will retry after 279.089961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:42.255093  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:42.275630  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:42.275759  619438 retry.go:31] will retry after 427.799265ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:42.704460  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:42.723085  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:42.723291  619438 retry.go:31] will retry after 748.226264ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:43.472625  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:43.493097  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	W0917 00:39:43.493238  619438 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:39:43.493260  619438 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:43.493279  619438 fix.go:56] duration metric: took 3m3.3745821s for fixHost
	I0917 00:39:43.493294  619438 start.go:83] releasing machines lock for "ha-671025-m04", held for 3m3.374622198s
	W0917 00:39:43.493451  619438 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-671025" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:43.495244  619438 out.go:203] 
	W0917 00:39:43.496536  619438 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:39:43.496558  619438 out.go:285] * 
	W0917 00:39:43.498254  619438 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 00:39:43.499426  619438 out.go:203] 
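
The root failure here is mechanical: once the container drops out of the running state, its .NetworkSettings.Ports map is empty, so the nested index in the Go template minikube uses to find the 22/tcp host port errors out (the exit-code-1 inspect calls above). A defensive variant — sketch only, the function below is illustrative rather than minikube's cli_runner — would check container state before reading the mapping:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the host port mapped to 22/tcp for a container,
// failing early with a clearer error when the container is not running.
// Illustrative only; minikube's real lookup lives in its cli_runner/kic code.
func sshHostPort(container string) (string, error) {
	state, err := exec.Command("docker", "container", "inspect",
		"-f", "{{.State.Running}}", container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	if strings.TrimSpace(string(state)) != "true" {
		return "", fmt.Errorf("container %s is not running; no SSH port to read", container)
	}
	// Same template the logs show; only safe once the container is up.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", fmt.Errorf("read 22/tcp mapping: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("ha-671025-m04")
	fmt.Println(port, err)
}
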
	
	
	==> CRI-O <==
	Sep 17 00:33:14 ha-671025 crio[565]: time="2025-09-17 00:33:14.250668570Z" level=info msg="Started container" PID=1371 containerID=0a6ec806f09b0ec6cd3c05e4e3ae47a201470e8dd91c163a0a50e778942c1fdf description=kube-system/coredns-66bc5c9577-vfj56/coredns id=e249fce6-f4cd-4113-83e0-50d04adcc10f name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b722ecf2f3e80164bf38e495945b2f9de2da062098248c531372f1254b04027
	Sep 17 00:33:14 ha-671025 crio[565]: time="2025-09-17 00:33:14.254529988Z" level=info msg="Started container" PID=1357 containerID=0f6f22dfaf3f5c42ab834fbdacc268222b9381892b372e6c6777b8cdc48ae94d description=kube-system/kube-proxy-f58dt/kube-proxy id=a0f2eb2e-8af2-4dfd-a58a-1737b5f99d21 name=/runtime.v1.RuntimeService/StartContainer sandboxID=86370afe3da8daa2b358bfa93e3418e66144d35d035fed0a638a50924fa59408
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.753340587Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.758517303Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.758557932Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.758575572Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.764982577Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.765047831Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.765068425Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.769374951Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.769549150Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.769575818Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.773978219Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.774011909Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.807516826Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ed976c02-d574-4c82-bfc5-c9beb8325877 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.807738230Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ed976c02-d574-4c82-bfc5-c9beb8325877 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.808425117Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f7b84450-4a24-4619-b6df-a4e028fc709d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.808644322Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f7b84450-4a24-4619-b6df-a4e028fc709d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.809516747Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f7135108-062d-4210-941f-2121b4150437 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.809630183Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.824058373Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e4482785ad19323f369936fbb4daa43031f78405e411d03a635704ce0b9bfa42/merged/etc/passwd: no such file or directory"
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.824101095Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e4482785ad19323f369936fbb4daa43031f78405e411d03a635704ce0b9bfa42/merged/etc/group: no such file or directory"
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.883592079Z" level=info msg="Created container ecf22eec472717336b0fb89198d6c0b167e76973e6e3cd230dd0afcde977a9a9: kube-system/storage-provisioner/storage-provisioner" id=f7135108-062d-4210-941f-2121b4150437 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.884330281Z" level=info msg="Starting container: ecf22eec472717336b0fb89198d6c0b167e76973e6e3cd230dd0afcde977a9a9" id=e3034afa-a009-4659-9e70-4826d4a036d3 name=/runtime.v1.RuntimeService/StartContainer
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.892093157Z" level=info msg="Started container" PID=1755 containerID=ecf22eec472717336b0fb89198d6c0b167e76973e6e3cd230dd0afcde977a9a9 description=kube-system/storage-provisioner/storage-provisioner id=e3034afa-a009-4659-9e70-4826d4a036d3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=84705f66b6f00fabea4a34fd2340cb783d9fd23e696a1d70dfe64392537e0e17
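
The "CNI monitoring event" entries above are CRI-O's inotify watch on /etc/cni/net.d reacting to kindnet rewriting its conflist (temp-file CREATE/WRITE, then a RENAME into place), reloading the default network each time. A minimal sketch of the same watch-and-reload idea with github.com/fsnotify/fsnotify — illustrative; CRI-O's real watcher lives in its ocicni dependency:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Watch the CNI config directory, as CRI-O does.
	if err := watcher.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-watcher.Events:
			// CREATE / WRITE / RENAME events trigger a config reload.
			if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
				log.Printf("CNI monitoring event %q: %s", ev.Name, ev.Op)
				// reload the default CNI network here
			}
		case err := <-watcher.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}
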
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ecf22eec47271       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago       Running             storage-provisioner       3                   84705f66b6f00       storage-provisioner
	0a6ec806f09b0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   6 minutes ago       Running             coredns                   1                   3b722ecf2f3e8       coredns-66bc5c9577-vfj56
	911039394b566       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   6 minutes ago       Running             busybox                   1                   0d31993e30b9d       busybox-7b57f96db7-wj4r5
	0f6f22dfaf3f5       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   6 minutes ago       Running             kube-proxy                1                   86370afe3da8d       kube-proxy-f58dt
	d8a3a53722ee7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 minutes ago       Running             kindnet-cni               1                   573be4d17bc4c       kindnet-9zvhz
	79c32235f9c36       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago       Exited              storage-provisioner       2                   84705f66b6f00       storage-provisioner
	1151cd93da2ad       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   6 minutes ago       Running             coredns                   1                   4c29d74d630f3       coredns-66bc5c9577-mqh24
	dd21b88addb23       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   6 minutes ago       Running             kube-controller-manager   1                   17b3a59f2d7b6       kube-controller-manager-ha-671025
	c7b95b9bb5f9d       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   6 minutes ago       Running             kube-scheduler            1                   0d6a7ac1856cb       kube-scheduler-ha-671025
	3fa5cc179a477       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   6 minutes ago       Running             kube-apiserver            1                   c0bb4371ed6c8       kube-apiserver-ha-671025
	3a99a51aacd42       765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23   6 minutes ago       Running             kube-vip                  0                   aca3020b8c9d0       kube-vip-ha-671025
	feb54ecd21790       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   6 minutes ago       Running             etcd                      1                   ff786868f6409       etcd-ha-671025
	
	
	==> coredns [0a6ec806f09b0ec6cd3c05e4e3ae47a201470e8dd91c163a0a50e778942c1fdf] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[... 7 more identical "waiting for Kubernetes API before starting server" messages omitted ...]
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41081 - 22204 "HINFO IN 3438997292128027948.7850884943177890662. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020285532s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [1151cd93da2add1289085967f6fd11dca725fe05835ee8882364ce8ef4d5c1d9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[... 6 more identical "waiting for Kubernetes API before starting server" messages omitted ...]
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34114 - 63412 "HINFO IN 8932016049737155266.1565975528977438817. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04450606s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
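
Both coredns instances fail identically: the kubernetes plugin's reflectors cannot list Services, Namespaces, and EndpointSlices through the cluster VIP (10.96.0.1:443) while the control plane is still converging, so the List calls time out until the apiserver becomes reachable. The failing request is essentially a paged List — a sketch with client-go, mirroring the limit=500 URLs in the log:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config resolves to the service VIP (10.96.0.1:443 here).
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same request the reflector issues: GET /api/v1/services?limit=500.
	svcs, err := clientset.CoreV1().Services(metav1.NamespaceAll).
		List(context.Background(), metav1.ListOptions{Limit: 500})
	if err != nil {
		// With the apiserver unreachable this surfaces as "i/o timeout",
		// matching the reflector errors above.
		log.Fatalf("list services: %v", err)
	}
	log.Printf("listed %d services", len(svcs.Items))
}
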
	
	
	==> describe nodes <==
	Name:               ha-671025
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T00_28_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:28:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:39:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:38:49 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:38:49 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:38:49 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:38:49 +0000   Wed, 17 Sep 2025 00:28:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-671025
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 8ed2fe35b45d401da396432da19b49e7
	  System UUID:                3f139a28-0338-43b0-8ed0-9128b9dcda65
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wj4r5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-mqh24             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 coredns-66bc5c9577-vfj56             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-ha-671025                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-9zvhz                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-671025             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-671025    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-f58dt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-671025             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-671025                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  Starting                 6m43s                  kube-proxy       
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)      kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           11m                    node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  NodeReady                11m                    kubelet          Node ha-671025 status is now: NodeReady
	  Normal  RegisteredNode           10m                    node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           8m36s                  node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  Starting                 6m59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m59s (x8 over 6m59s)  kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m59s (x8 over 6m59s)  kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m59s (x8 over 6m59s)  kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m43s                  node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           6m43s                  node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           6m29s                  node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	
	
	Name:               ha-671025-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_17T00_29_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:29:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:39:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:33:22 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:33:22 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:33:22 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:33:22 +0000   Wed, 17 Sep 2025 00:29:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-671025-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 34a83f19fcce42489e31c52ddb1f71d8
	  System UUID:                7d7ccba3-1786-4f88-a69c-4a852e967ea0
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-zw5tc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-671025-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-7scsq                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-671025-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-671025-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-4k8lz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-671025-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-671025-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m32s                  kube-proxy       
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  RegisteredNode           10m                    node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  NodeHasNoDiskPressure    8m42s (x8 over 8m42s)  kubelet          Node ha-671025-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m42s (x8 over 8m42s)  kubelet          Node ha-671025-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m42s (x8 over 8m42s)  kubelet          Node ha-671025-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           8m36s                  node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  Starting                 6m57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m57s (x8 over 6m57s)  kubelet          Node ha-671025-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m57s (x8 over 6m57s)  kubelet          Node ha-671025-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m57s (x8 over 6m57s)  kubelet          Node ha-671025-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m43s                  node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode           6m43s                  node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode           6m29s                  node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
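For triage, the node state captured above can be refreshed with plain kubectl; the context and node names below are the ones this run uses:

    # Re-inspect the secondary control-plane node from the host
    kubectl --context ha-671025 describe node ha-671025-m02
    kubectl --context ha-671025 get nodes -o wide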
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
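The martian-destination lines are kernel routing noise from pod traffic aimed at Docker's embedded DNS resolver on 127.0.0.11, not a failure signal on their own. A quick way to gauge the volume per veth, assuming shell access to the node through minikube ssh:

    # Count martian entries per veth interface on the node
    out/minikube-linux-amd64 -p ha-671025 ssh -- \
      "sudo dmesg | grep -o 'dev veth[0-9a-f]*' | sort | uniq -c"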
	
	
	==> etcd [feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15] <==
	{"level":"info","ts":"2025-09-17T00:33:24.731280Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118"}
	{"level":"warn","ts":"2025-09-17T00:33:25.373568Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"58f1161d61ce118","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:33:25.373669Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"58f1161d61ce118","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:39:49.658319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:39498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:39:49.685926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:39528","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:39:49.695161Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892 13140772435598162251)"}
	{"level":"info","ts":"2025-09-17T00:39:49.698056Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"58f1161d61ce118","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-09-17T00:39:49.698118Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"58f1161d61ce118"}
	{"level":"warn","ts":"2025-09-17T00:39:49.698201Z","caller":"etcdserver/server.go:718","msg":"rejected Raft message from removed member","local-member-id":"aec36adc501070cc","removed-member-id":"58f1161d61ce118"}
	{"level":"warn","ts":"2025-09-17T00:39:49.698275Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2025-09-17T00:39:49.698272Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"58f1161d61ce118"}
	{"level":"info","ts":"2025-09-17T00:39:49.698300Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"58f1161d61ce118"}
	{"level":"warn","ts":"2025-09-17T00:39:49.698547Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"58f1161d61ce118"}
	{"level":"info","ts":"2025-09-17T00:39:49.698622Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"58f1161d61ce118"}
	{"level":"info","ts":"2025-09-17T00:39:49.698655Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118"}
	{"level":"warn","ts":"2025-09-17T00:39:49.698795Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118","error":"context canceled"}
	{"level":"warn","ts":"2025-09-17T00:39:49.698837Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"58f1161d61ce118","error":"failed to read 58f1161d61ce118 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-09-17T00:39:49.698865Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118"}
	{"level":"warn","ts":"2025-09-17T00:39:49.699000Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118","error":"context canceled"}
	{"level":"info","ts":"2025-09-17T00:39:49.699036Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118"}
	{"level":"info","ts":"2025-09-17T00:39:49.699045Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"58f1161d61ce118"}
	{"level":"info","ts":"2025-09-17T00:39:49.699059Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"58f1161d61ce118"}
	{"level":"info","ts":"2025-09-17T00:39:49.699122Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"58f1161d61ce118"}
	{"level":"warn","ts":"2025-09-17T00:39:49.706432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:46130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:39:49.706719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:46134","server-name":"","error":"EOF"}
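The entries above record the removal of peer 58f1161d61ce118 (192.168.49.4, the deleted m03 member). The surviving membership can be confirmed in-cluster; the certificate paths below are minikube's usual layout and may need adjusting:

    kubectl --context ha-671025 -n kube-system exec etcd-ha-671025 -- etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      member list -w table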
	
	
	==> kernel <==
	 00:39:58 up  3:22,  0 users,  load average: 0.39, 0.63, 3.18
	Linux ha-671025 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [d8a3a53722ee71de725c2794a050878da7894fbc523bb6bac8efe7e38865e48e] <==
	I0917 00:39:14.752917       1 main.go:301] handling current node
	I0917 00:39:14.752930       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:39:14.752934       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:39:24.755944       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:39:24.755981       1 main.go:301] handling current node
	I0917 00:39:24.755998       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:39:24.756003       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:39:24.756183       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:39:24.756192       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:39:34.760510       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:39:34.760554       1 main.go:301] handling current node
	I0917 00:39:34.760573       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:39:34.760579       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:39:34.760773       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:39:34.760789       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:39:44.756468       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:39:44.756507       1 main.go:301] handling current node
	I0917 00:39:44.756526       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:39:44.756532       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:39:44.756690       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:39:44.756700       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:39:54.752279       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:39:54.752346       1 main.go:301] handling current node
	I0917 00:39:54.752365       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:39:54.752371       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
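kindnet still programs routes for m03 (192.168.49.4) at 00:39:44 and has dropped them by 00:39:54, consistent with the etcd member removal above. The per-node pod-CIDR routes it manages can be read straight off the node container:

    # One 10.244.x.0/24 route per remaining peer node is expected
    docker exec ha-671025 ip route show | grep 10.244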
	
	
	==> kube-apiserver [3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da] <==
	I0917 00:33:12.203011       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:33:12.204560       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0917 00:33:12.215378       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0917 00:33:12.225713       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0917 00:33:12.225748       1 policy_source.go:240] refreshing policies
	E0917 00:33:12.257458       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0917 00:33:12.275512       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 00:33:13.102620       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0917 00:33:13.467644       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0917 00:33:13.469377       1 controller.go:667] quota admission added evaluator for: endpoints
	I0917 00:33:13.475334       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 00:33:13.710304       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0917 00:33:15.400126       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0917 00:33:15.451962       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0917 00:33:15.550108       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0917 00:34:30.180357       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:34:36.295135       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:35:58.087614       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:36:04.861775       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:37:09.469711       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:37:20.231944       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:38:10.023844       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:38:42.747905       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:39:27.376187       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:39:48.847821       1 stats.go:136] "Error getting keys" err="empty key: \"\""
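The recurring "Error getting keys" entries originate in the apiserver's storage stats code (stats.go above) and can be tailed from the static pod directly; this is a triage convenience, not part of the test:

    kubectl --context ha-671025 -n kube-system logs kube-apiserver-ha-671025 --tail=50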
	
	
	==> kube-controller-manager [dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c] <==
	I0917 00:33:15.048114       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0917 00:33:15.050103       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0917 00:33:15.050156       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0917 00:33:15.050198       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0917 00:33:15.051603       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0917 00:33:15.052580       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0917 00:33:15.052596       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:33:15.052656       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0917 00:33:15.052705       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0917 00:33:15.052712       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0917 00:33:15.052716       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0917 00:33:15.072139       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:33:15.074323       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:33:15.079457       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0917 00:33:15.079609       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 00:33:15.079783       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-671025-m02"
	I0917 00:33:15.079806       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-671025"
	I0917 00:33:15.079783       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-671025-m03"
	I0917 00:33:15.079891       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	E0917 00:39:46.559964       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0917 00:39:55.062164       1 gc_controller.go:151] "Failed to get node" err="node \"ha-671025-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671025-m03"
	E0917 00:39:55.062205       1 gc_controller.go:151] "Failed to get node" err="node \"ha-671025-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671025-m03"
	E0917 00:39:55.062211       1 gc_controller.go:151] "Failed to get node" err="node \"ha-671025-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671025-m03"
	E0917 00:39:55.062216       1 gc_controller.go:151] "Failed to get node" err="node \"ha-671025-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671025-m03"
	E0917 00:39:55.062220       1 gc_controller.go:151] "Failed to get node" err="node \"ha-671025-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671025-m03"
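The pod-garbage-collector errors show the controller still chasing pods bound to the deleted ha-671025-m03. Any leftovers can be listed with a standard field selector:

    kubectl --context ha-671025 get pods -A -o wide \
      --field-selector spec.nodeName=ha-671025-m03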
	
	
	==> kube-proxy [0f6f22dfaf3f5c42ab834fbdacc268222b9381892b372e6c6777b8cdc48ae94d] <==
	I0917 00:33:14.310969       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:33:14.385159       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:33:14.485410       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:33:14.485454       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:33:14.485579       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:33:14.505543       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:33:14.505612       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:33:14.510944       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:33:14.511517       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:33:14.511559       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:33:14.512935       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:33:14.512967       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:33:14.513038       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:33:14.513032       1 config.go:200] "Starting service config controller"
	I0917 00:33:14.513056       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:33:14.513059       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:33:14.513068       1 config.go:309] "Starting node config controller"
	I0917 00:33:14.513103       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:33:14.513111       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:33:14.613338       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:33:14.613363       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:33:14.613385       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3] <==
	I0917 00:33:01.038603       1 serving.go:386] Generated self-signed cert in-memory
	W0917 00:33:11.582258       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0917 00:33:11.582299       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 00:33:11.582308       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 00:33:12.169895       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:33:12.169942       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:33:12.174415       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:33:12.174635       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:33:12.174667       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:33:12.174692       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:33:12.274752       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 17 00:37:49 ha-671025 kubelet[719]: E0917 00:37:49.715267     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069469714967932  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:37:59 ha-671025 kubelet[719]: E0917 00:37:59.717124     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069479716902039  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:37:59 ha-671025 kubelet[719]: E0917 00:37:59.717155     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069479716902039  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:09 ha-671025 kubelet[719]: E0917 00:38:09.719199     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069489718952365  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:09 ha-671025 kubelet[719]: E0917 00:38:09.719231     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069489718952365  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:19 ha-671025 kubelet[719]: E0917 00:38:19.720791     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069499720508720  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:19 ha-671025 kubelet[719]: E0917 00:38:19.720832     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069499720508720  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:29 ha-671025 kubelet[719]: E0917 00:38:29.722482     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069509722189753  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:29 ha-671025 kubelet[719]: E0917 00:38:29.722526     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069509722189753  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:39 ha-671025 kubelet[719]: E0917 00:38:39.724772     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069519724406774  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:39 ha-671025 kubelet[719]: E0917 00:38:39.724820     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069519724406774  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:49 ha-671025 kubelet[719]: E0917 00:38:49.726218     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069529725971912  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:49 ha-671025 kubelet[719]: E0917 00:38:49.726259     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069529725971912  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:59 ha-671025 kubelet[719]: E0917 00:38:59.727787     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069539727493186  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:59 ha-671025 kubelet[719]: E0917 00:38:59.727827     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069539727493186  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:09 ha-671025 kubelet[719]: E0917 00:39:09.729035     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069549728835025  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:09 ha-671025 kubelet[719]: E0917 00:39:09.729066     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069549728835025  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:19 ha-671025 kubelet[719]: E0917 00:39:19.730347     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069559730086423  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:19 ha-671025 kubelet[719]: E0917 00:39:19.730386     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069559730086423  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:29 ha-671025 kubelet[719]: E0917 00:39:29.731647     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069569731379538  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:29 ha-671025 kubelet[719]: E0917 00:39:29.731688     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069569731379538  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:39 ha-671025 kubelet[719]: E0917 00:39:39.732899     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069579732705681  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:39 ha-671025 kubelet[719]: E0917 00:39:39.732940     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069579732705681  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:49 ha-671025 kubelet[719]: E0917 00:39:49.734288     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069589734023750  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:49 ha-671025 kubelet[719]: E0917 00:39:49.734515     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069589734023750  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	

-- /stdout --
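The repeating kubelet errors in the log above are the eviction manager failing to parse cri-o's ImageFsInfo response ("missing image stats"). The raw CRI answer can be inspected on the node; crictl is assumed present, as it is in the kicbase image:

    out/minikube-linux-amd64 -p ha-671025 ssh -- sudo crictl imagefsinfo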
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-671025 -n ha-671025
helpers_test.go:269: (dbg) Run:  kubectl --context ha-671025 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-vmzxx
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-671025 describe pod busybox-7b57f96db7-vmzxx
helpers_test.go:290: (dbg) kubectl --context ha-671025 describe pod busybox-7b57f96db7-vmzxx:

-- stdout --
	Name:             busybox-7b57f96db7-vmzxx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gsm85 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-gsm85:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  12s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  10s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  11s (x2 over 13s)  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  11s (x2 over 13s)  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

-- /stdout --
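The FailedScheduling events describe a required pod anti-affinity on the busybox workload: one node is unschedulable and the other two already host a replica, so no candidate remains. The live rule can be read back rather than inferred:

    kubectl --context ha-671025 get deployment busybox \
      -o jsonpath='{.spec.template.spec.affinity}'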
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (13.42s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-671025" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-671025\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-671025\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares
\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.0\",\"ClusterName\":\"ha-671025\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":
\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry
-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":
\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
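The assertion reads the Status field out of the JSON above and expected "Degraded" while the profile still reports "Starting". The same view can be reproduced on the host (jq only for readability):

    out/minikube-linux-amd64 profile list --output json \
      | jq '.valid[] | {Name, Status}'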
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-671025
helpers_test.go:243: (dbg) docker inspect ha-671025:

-- stdout --
	[
	    {
	        "Id": "843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea",
	        "Created": "2025-09-17T00:28:07.60079298Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 619633,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:32:53.286176868Z",
	            "FinishedAt": "2025-09-17T00:32:52.645586403Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/hostname",
	        "HostsPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/hosts",
	        "LogPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea-json.log",
	        "Name": "/ha-671025",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-671025:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-671025",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea",
	                "LowerDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-671025",
	                "Source": "/var/lib/docker/volumes/ha-671025/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-671025",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-671025",
	                "name.minikube.sigs.k8s.io": "ha-671025",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3e88ab0b1cbcc741c291833bfdeaa68e46e3b5db9345dc0aa90d473d7f1955a0",
	            "SandboxKey": "/var/run/docker/netns/3e88ab0b1cbc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33180"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-671025": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:78:32:58:80:a9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0c35d0ccc41812bde7181e33c481a92e6c52d2d90efef6c84bca54a78763ef8",
	                    "EndpointID": "62110bd5e439ab2c08160ae7846f5c9267265e2e870f01c3985d76fb403512f7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-671025",
	                        "843490787feb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
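When the full inspect dump is more than needed, the fields these post-mortems actually consult can be pulled with a Go template (field names as in the output above):

    docker inspect -f \
      '{{.State.Status}} {{(index .NetworkSettings.Networks "ha-671025").IPAddress}}' \
      ha-671025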
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-671025 -n ha-671025
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-671025 logs -n 25: (1.232832855s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m02 sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025-m02.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ cp      │ ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt ha-671025-m04:/home/docker/cp-test_ha-671025-m03_ha-671025-m04.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025-m04.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp testdata/cp-test.txt ha-671025-m04:/home/docker/cp-test.txt                                                            │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile688907033/001/cp-test_ha-671025-m04.txt │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025:/home/docker/cp-test_ha-671025-m04_ha-671025.txt                      │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025.txt                                                │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025-m02:/home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m02 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025-m03:/home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ node    │ ha-671025 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:31 UTC │
	│ node    │ ha-671025 node start m02 --alsologtostderr -v 5                                                                                     │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:31 UTC │ 17 Sep 25 00:31 UTC │
	│ node    │ ha-671025 node list --alsologtostderr -v 5                                                                                          │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:32 UTC │                     │
	│ stop    │ ha-671025 stop --alsologtostderr -v 5                                                                                               │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:32 UTC │ 17 Sep 25 00:32 UTC │
	│ start   │ ha-671025 start --wait true --alsologtostderr -v 5                                                                                  │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:32 UTC │                     │
	│ node    │ ha-671025 node list --alsologtostderr -v 5                                                                                          │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:39 UTC │                     │
	│ node    │ ha-671025 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:39 UTC │ 17 Sep 25 00:39 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:32:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:32:53.048533  619438 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:32:53.048790  619438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:32:53.048798  619438 out.go:374] Setting ErrFile to fd 2...
	I0917 00:32:53.048801  619438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:32:53.049018  619438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:32:53.049513  619438 out.go:368] Setting JSON to false
	I0917 00:32:53.050516  619438 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11716,"bootTime":1758057457,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:32:53.050646  619438 start.go:140] virtualization: kvm guest
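The hostinfo line above is a struct dump whose field set (hostname, uptime, bootTime, procs, platform, kernelArch, virtualizationSystem/Role, hostId) matches gopsutil's host.InfoStat, which appears to back this probe. A minimal standalone sketch of the same host query, assuming the github.com/shirou/gopsutil/v3 module rather than minikube's own start.go:

	package main

	import (
		"fmt"

		"github.com/shirou/gopsutil/v3/host"
	)

	func main() {
		// host.Info() gathers the same hostname/uptime/platform/virtualization
		// fields that appear in the hostinfo log line above.
		info, err := host.Info()
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: %s %s (%s), kernel %s, virt=%s/%s\n",
			info.Hostname, info.Platform, info.PlatformVersion, info.KernelArch,
			info.KernelVersion, info.VirtualizationSystem, info.VirtualizationRole)
	}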
	I0917 00:32:53.052823  619438 out.go:179] * [ha-671025] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:32:53.054178  619438 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:32:53.054271  619438 notify.go:220] Checking for updates...
	I0917 00:32:53.056434  619438 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:32:53.057686  619438 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:32:53.058908  619438 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 00:32:53.060062  619438 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:32:53.061204  619438 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:32:53.062799  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:32:53.062904  619438 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:32:53.089453  619438 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:32:53.089539  619438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:32:53.148341  619438 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:32:53.138207862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
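The docker info dump above comes from shelling out to `docker system info --format "{{json .}}"` (the cli_runner line that precedes it) and decoding the JSON. A minimal sketch of that pattern; the DockerInfo struct here is a hypothetical subset, not minikube's info.go type:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// DockerInfo models only a few of the fields visible in the dump above.
	type DockerInfo struct {
		ServerVersion string `json:"ServerVersion"`
		Driver        string `json:"Driver"`
		NCPU          int    `json:"NCPU"`
		MemTotal      int64  `json:"MemTotal"`
	}

	func main() {
		// Ask the docker CLI to print its info as a single JSON object.
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info DockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("docker %s, driver=%s, cpus=%d, mem=%d\n",
			info.ServerVersion, info.Driver, info.NCPU, info.MemTotal)
	}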
	I0917 00:32:53.148496  619438 docker.go:318] overlay module found
	I0917 00:32:53.150179  619438 out.go:179] * Using the docker driver based on existing profile
	I0917 00:32:53.151230  619438 start.go:304] selected driver: docker
	I0917 00:32:53.151250  619438 start.go:918] validating driver "docker" against &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:32:53.151427  619438 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:32:53.151523  619438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:32:53.207764  619438 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:32:53.197259177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:32:53.208608  619438 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:32:53.208644  619438 cni.go:84] Creating CNI manager for ""
	I0917 00:32:53.208723  619438 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:32:53.208799  619438 start.go:348] cluster config:
	{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
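The cluster config dumped above is what minikube persists per profile as config.json (see the profile.go "Saving config" lines below). A sketch of reading a few of those fields back, with a deliberately reduced struct; the field names follow the dump, but this is not minikube's config package:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// Node and ClusterConfig model a small subset of the dump above.
	type Node struct {
		Name         string
		IP           string
		Port         int
		ControlPlane bool
		Worker       bool
	}

	type ClusterConfig struct {
		Name   string
		Driver string
		Memory int
		CPUs   int
		Nodes  []Node
	}

	func main() {
		// e.g. .minikube/profiles/ha-671025/config.json, passed as the first argument.
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			panic(err)
		}
		var cc ClusterConfig
		if err := json.Unmarshal(data, &cc); err != nil {
			panic(err)
		}
		for _, n := range cc.Nodes {
			fmt.Printf("%s %s control-plane=%v\n", n.Name, n.IP, n.ControlPlane)
		}
	}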
	I0917 00:32:53.210881  619438 out.go:179] * Starting "ha-671025" primary control-plane node in "ha-671025" cluster
	I0917 00:32:53.212367  619438 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:32:53.213541  619438 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:32:53.214652  619438 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:32:53.214718  619438 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 00:32:53.214729  619438 cache.go:58] Caching tarball of preloaded images
	I0917 00:32:53.214774  619438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:32:53.214807  619438 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:32:53.214815  619438 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:32:53.214955  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:32:53.239640  619438 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:32:53.239670  619438 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:32:53.239694  619438 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:32:53.239727  619438 start.go:360] acquireMachinesLock for ha-671025: {Name:mk59b9e849284ed1f29625993b42430f4f0355ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:32:53.239821  619438 start.go:364] duration metric: took 66.466µs to acquireMachinesLock for "ha-671025"
	I0917 00:32:53.239847  619438 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:32:53.239857  619438 fix.go:54] fixHost starting: 
	I0917 00:32:53.240183  619438 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:32:53.258645  619438 fix.go:112] recreateIfNeeded on ha-671025: state=Stopped err=<nil>
	W0917 00:32:53.258676  619438 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:32:53.260365  619438 out.go:252] * Restarting existing docker container for "ha-671025" ...
	I0917 00:32:53.260462  619438 cli_runner.go:164] Run: docker start ha-671025
	I0917 00:32:53.507970  619438 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:32:53.529432  619438 kic.go:430] container "ha-671025" state is running.
	I0917 00:32:53.530679  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:32:53.550608  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:32:53.550906  619438 machine.go:93] provisionDockerMachine start ...
	I0917 00:32:53.551014  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:53.571235  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:32:53.571518  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0917 00:32:53.571532  619438 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:32:53.572179  619438 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48548->127.0.0.1:33178: read: connection reset by peer
	I0917 00:32:56.710627  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025
	
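Note the sequence above: the first dial at 00:32:53 fails with "connection reset by peer" because sshd inside the freshly restarted container is not up yet, and the same command succeeds about three seconds later. A sketch of that dial-until-ready loop, assuming golang.org/x/crypto/ssh and the host-mapped port 33178 from the log:

	package main

	import (
		"fmt"
		"log"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// dialWithRetry keeps trying the SSH port until the restarted container's
	// sshd accepts the handshake, tolerating early connection resets.
	func dialWithRetry(addr string, cfg *ssh.ClientConfig, deadline time.Duration) (*ssh.Client, error) {
		var lastErr error
		for start := time.Now(); time.Since(start) < deadline; {
			c, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				return c, nil
			}
			lastErr = err
			time.Sleep(500 * time.Millisecond)
		}
		return nil, fmt.Errorf("ssh dial %s: %w", addr, lastErr)
	}

	func main() {
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{}, // add ssh.PublicKeys(signer) in real use
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
			Timeout:         5 * time.Second,
		}
		client, err := dialWithRetry("127.0.0.1:33178", cfg, 30*time.Second)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
	}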
	I0917 00:32:56.710663  619438 ubuntu.go:182] provisioning hostname "ha-671025"
	I0917 00:32:56.710724  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:56.729879  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:32:56.730123  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0917 00:32:56.730136  619438 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025 && echo "ha-671025" | sudo tee /etc/hostname
	I0917 00:32:56.882161  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025
	
	I0917 00:32:56.882256  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:56.901113  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:32:56.901437  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0917 00:32:56.901465  619438 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025' | sudo tee -a /etc/hosts; 
				fi
			fi
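The shell fragment above keeps /etc/hosts idempotent: it adds a 127.0.1.1 line for the hostname only if no line already names the host, rewriting an existing 127.0.1.1 entry in place where there is one. A sketch of the same check-then-append half of that logic in Go (run it against a scratch file; writing the live /etc/hosts needs root):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry appends "127.0.1.1 <name>" unless some line already ends
	// with the hostname, mirroring the grep -xq guard in the shell snippet above.
	func ensureHostsEntry(path, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		for _, line := range strings.Split(string(data), "\n") {
			fields := strings.Fields(line)
			if len(fields) >= 2 && fields[len(fields)-1] == name {
				return nil // already present
			}
		}
		f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = fmt.Fprintf(f, "127.0.1.1 %s\n", name)
		return err
	}

	func main() {
		if err := ensureHostsEntry("hosts.test", "ha-671025"); err != nil {
			panic(err)
		}
	}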
	I0917 00:32:57.039832  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:32:57.039868  619438 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:32:57.039923  619438 ubuntu.go:190] setting up certificates
	I0917 00:32:57.039945  619438 provision.go:84] configureAuth start
	I0917 00:32:57.040038  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:32:57.059654  619438 provision.go:143] copyHostCerts
	I0917 00:32:57.059702  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:32:57.059734  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:32:57.059744  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:32:57.059817  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:32:57.059920  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:32:57.059938  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:32:57.059953  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:32:57.059984  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:32:57.060042  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:32:57.060059  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:32:57.060063  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:32:57.060107  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:32:57.060165  619438 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025 san=[127.0.0.1 192.168.49.2 ha-671025 localhost minikube]
	I0917 00:32:57.261590  619438 provision.go:177] copyRemoteCerts
	I0917 00:32:57.261669  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:32:57.261706  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:57.282218  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:57.380298  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:32:57.380375  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:32:57.406100  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:32:57.406164  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0917 00:32:57.431902  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:32:57.431973  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 00:32:57.458627  619438 provision.go:87] duration metric: took 418.658957ms to configureAuth
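configureAuth above re-issues the machine's server certificate so its SANs cover every address the node answers to (the san=[...] list two lines up). A sketch of minting a certificate with such a SAN set using crypto/x509; it is self-signed here for brevity, whereas minikube signs with its own CA key, and the name/IP values are copied from the log:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-671025"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs: every hostname and IP the server certificate must cover.
			DNSNames:    []string{"ha-671025", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}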
	I0917 00:32:57.458662  619438 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:32:57.458871  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:32:57.458975  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:57.477933  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:32:57.478176  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0917 00:32:57.478194  619438 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:32:57.778279  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:32:57.778306  619438 machine.go:96] duration metric: took 4.227377039s to provisionDockerMachine
	I0917 00:32:57.778321  619438 start.go:293] postStartSetup for "ha-671025" (driver="docker")
	I0917 00:32:57.778335  619438 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:32:57.778405  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:32:57.778457  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:57.799370  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:57.898480  619438 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:32:57.902232  619438 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:32:57.902263  619438 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:32:57.902270  619438 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:32:57.902278  619438 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:32:57.902290  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:32:57.902356  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:32:57.902449  619438 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:32:57.902461  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:32:57.902551  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:32:57.912046  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:32:57.938010  619438 start.go:296] duration metric: took 159.669671ms for postStartSetup
	I0917 00:32:57.938093  619438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:32:57.938130  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:57.958300  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:58.051975  619438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:32:58.057124  619438 fix.go:56] duration metric: took 4.817259212s for fixHost
	I0917 00:32:58.057152  619438 start.go:83] releasing machines lock for "ha-671025", held for 4.817316777s
	I0917 00:32:58.057223  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:32:58.076270  619438 ssh_runner.go:195] Run: cat /version.json
	I0917 00:32:58.076324  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:58.076348  619438 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:32:58.076443  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:32:58.096247  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:58.097159  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:32:58.262989  619438 ssh_runner.go:195] Run: systemctl --version
	I0917 00:32:58.267773  619438 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:32:58.409261  619438 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:32:58.414211  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:32:58.423687  619438 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:32:58.423780  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:32:58.433966  619438 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
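The find/-exec commands above disable CNI config files by renaming them with a .mk_disabled suffix, which needs no parsing and is trivially reversible. A sketch of the same rename pass in Go, with the directory and glob pattern taken from the log:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableCNIConfigs renames every file matching the pattern so the container
	// runtime stops loading it, skipping files already disabled.
	func disableCNIConfigs(dir, pattern string) error {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", m)
		}
		return nil
	}

	func main() {
		if err := disableCNIConfigs("/etc/cni/net.d", "*loopback.conf*"); err != nil {
			panic(err)
		}
	}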
	I0917 00:32:58.434000  619438 start.go:495] detecting cgroup driver to use...
	I0917 00:32:58.434033  619438 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:32:58.434084  619438 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:32:58.447559  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:32:58.460424  619438 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:32:58.460531  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:32:58.474181  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:32:58.487071  619438 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:32:58.555422  619438 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:32:58.624823  619438 docker.go:234] disabling docker service ...
	I0917 00:32:58.624887  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:32:58.638410  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:32:58.650440  619438 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:32:58.717056  619438 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:32:58.784599  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:32:58.796601  619438 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:32:58.814550  619438 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:32:58.814628  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.825014  619438 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:32:58.825076  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.835600  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.845903  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.856370  619438 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:32:58.866050  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.876375  619438 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.886563  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:32:58.896783  619438 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:32:58.905534  619438 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:32:58.914324  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:32:58.980288  619438 ssh_runner.go:195] Run: sudo systemctl restart crio
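The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause_image, cgroup_manager, conmon_cgroup, unprivileged-port sysctl) before the daemon-reload and crio restart. A sketch of the first of those edits done natively with regexp instead of sed; point it at a scratch copy of the file rather than the live config:

	package main

	import (
		"os"
		"regexp"
	)

	// setPauseImage rewrites the pause_image line, mirroring the sed expression
	// s|^.*pause_image = .*$|pause_image = "<image>"| from the log above.
	func setPauseImage(path, image string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
		return os.WriteFile(path, out, 0644)
	}

	func main() {
		if err := setPauseImage("02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
			panic(err)
		}
	}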
	I0917 00:32:59.086529  619438 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:32:59.086607  619438 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:32:59.090665  619438 start.go:563] Will wait 60s for crictl version
	I0917 00:32:59.090717  619438 ssh_runner.go:195] Run: which crictl
	I0917 00:32:59.094291  619438 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:32:59.129626  619438 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:32:59.129717  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:32:59.166530  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:32:59.205640  619438 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:32:59.206928  619438 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:32:59.224561  619438 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:32:59.228789  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:32:59.241758  619438 kubeadm.go:875] updating cluster {Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:32:59.241920  619438 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:32:59.241988  619438 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:32:59.285898  619438 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:32:59.285921  619438 crio.go:433] Images already preloaded, skipping extraction
	I0917 00:32:59.285968  619438 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:32:59.321059  619438 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:32:59.321084  619438 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:32:59.321093  619438 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0917 00:32:59.321190  619438 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:32:59.321250  619438 ssh_runner.go:195] Run: crio config
	I0917 00:32:59.369526  619438 cni.go:84] Creating CNI manager for ""
	I0917 00:32:59.369549  619438 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 00:32:59.369567  619438 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:32:59.369587  619438 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-671025 NodeName:ha-671025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:32:59.369753  619438 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-671025"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
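The kubeadm config above is rendered from the option set logged at kubeadm.go:189 and later scp'd to /var/tmp/minikube/kubeadm.yaml.new. A sketch of that render step with text/template; the template fragment and params struct here are hypothetical, not minikube's own template code:

	package main

	import (
		"os"
		"text/template"
	)

	// A fragment of the InitConfiguration document above, parameterized the way
	// the logged option set suggests.
	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	type params struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		err := t.Execute(os.Stdout, params{
			AdvertiseAddress: "192.168.49.2",
			APIServerPort:    8443,
			CRISocket:        "unix:///var/run/crio/crio.sock",
			NodeName:         "ha-671025",
		})
		if err != nil {
			panic(err)
		}
	}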
	I0917 00:32:59.369775  619438 kube-vip.go:115] generating kube-vip config ...
	I0917 00:32:59.369814  619438 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:32:59.383509  619438 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:32:59.383620  619438 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
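The lsmod probe at 00:32:59.383 found no ip_vs modules, so the kube-vip manifest above is configured for plain ARP-based VIP failover (vip_arp=true) rather than IPVS-backed control-plane load balancing. A sketch of that probe-and-fallback decision:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// ipvsAvailable reports whether any ip_vs kernel module is loaded,
	// the same check as `lsmod | grep ip_vs` in the log above.
	func ipvsAvailable() bool {
		out, err := exec.Command("lsmod").Output()
		if err != nil {
			return false
		}
		for _, line := range strings.Split(string(out), "\n") {
			if strings.HasPrefix(line, "ip_vs") {
				return true
			}
		}
		return false
	}

	func main() {
		if ipvsAvailable() {
			fmt.Println("enabling control-plane load-balancing")
		} else {
			fmt.Println("giving up load-balancing, using ARP-only VIP failover")
		}
	}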
	I0917 00:32:59.383670  619438 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:32:59.393067  619438 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:32:59.393127  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:32:59.402584  619438 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0917 00:32:59.422262  619438 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:32:59.442170  619438 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I0917 00:32:59.461958  619438 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:32:59.481675  619438 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:32:59.485564  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:32:59.497547  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:32:59.561107  619438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:32:59.583877  619438 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.2
	I0917 00:32:59.583902  619438 certs.go:194] generating shared ca certs ...
	I0917 00:32:59.583919  619438 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:32:59.584079  619438 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:32:59.584130  619438 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:32:59.584138  619438 certs.go:256] generating profile certs ...
	I0917 00:32:59.584206  619438 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:32:59.584231  619438 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.5d6eefc6
	I0917 00:32:59.584246  619438 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.5d6eefc6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0917 00:33:00.130871  619438 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.5d6eefc6 ...
	I0917 00:33:00.130908  619438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.5d6eefc6: {Name:mkf467d0f9030b6e7125c3be410cb9c880d64270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:00.131088  619438 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.5d6eefc6 ...
	I0917 00:33:00.131108  619438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.5d6eefc6: {Name:mk8b3c4ad94a18f1741ce8fdbeceb16bceee6f1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:00.131220  619438 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.5d6eefc6 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt
	I0917 00:33:00.131404  619438 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.5d6eefc6 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key
	I0917 00:33:00.131601  619438 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:33:00.131625  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:33:00.131643  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:33:00.131658  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:33:00.131673  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:33:00.131687  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:33:00.131702  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:33:00.131714  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:33:00.131729  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:33:00.131788  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:33:00.131823  619438 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:33:00.131830  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:33:00.131857  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:33:00.131878  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:33:00.131897  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:33:00.131942  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:33:00.131980  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:00.132001  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:33:00.132015  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:33:00.132585  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:33:00.165089  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:33:00.198657  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:33:00.239751  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:33:00.280419  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:33:00.317099  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:33:00.355265  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:33:00.390225  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:33:00.418200  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:33:00.443790  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:33:00.469778  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:33:00.495605  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:33:00.516723  619438 ssh_runner.go:195] Run: openssl version
	I0917 00:33:00.522849  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:33:00.533838  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:00.538041  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:00.538112  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:00.545733  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:33:00.555787  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:33:00.566338  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:33:00.570140  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:33:00.570203  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:33:00.577687  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:33:00.587720  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:33:00.599252  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:33:00.603349  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:33:00.603456  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:33:00.611701  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
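	(The three openssl/ln sequences above implement OpenSSL's hashed CA-directory lookup: each PEM under /etc/ssl/certs gets a symlink named after its subject hash plus ".0" — hence b5213941.0, 51391683.0 and 3ec20f2e.0. A minimal sketch of one iteration, assuming the same paths as the log:
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # HASH is b5213941 in this run
	)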
	I0917 00:33:00.622604  619438 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:33:00.626359  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:33:00.633232  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:33:00.640671  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:33:00.647926  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:33:00.655266  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:33:00.662987  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
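	(The -checkend 86400 runs above are expiry probes: openssl x509 -checkend exits non-zero if the cert will expire within the given number of seconds, 24h here. A hedged standalone equivalent over three of the certs checked above:
	  for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	    openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
	      || echo "${crt}.crt expires within 24h"   # non-zero exit from -checkend means "will expire"
	  done
	)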
	I0917 00:33:00.670413  619438 kubeadm.go:392] StartCluster: {Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadge
t:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:33:00.670534  619438 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 00:33:00.670583  619438 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:33:00.712724  619438 cri.go:89] found id: "dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c"
	I0917 00:33:00.712747  619438 cri.go:89] found id: "c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3"
	I0917 00:33:00.712751  619438 cri.go:89] found id: "3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da"
	I0917 00:33:00.712754  619438 cri.go:89] found id: "3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49"
	I0917 00:33:00.712757  619438 cri.go:89] found id: "feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15"
	I0917 00:33:00.712761  619438 cri.go:89] found id: ""
	I0917 00:33:00.712805  619438 ssh_runner.go:195] Run: sudo runc list -f json
	I0917 00:33:00.733477  619438 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49","pid":805,"status":"running","bundle":"/run/containers/storage/overlay-containers/3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49/userdata","rootfs":"/var/lib/containers/storage/overlay/d1bbef73ef376ea943ccf80c23fb8fd4556f886e52e63a59db0627508fb2430b/merged","created":"2025-09-17T00:33:00.224803069Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d64ad60b","io.kubernetes.container.name":"kube-vip","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d64ad60b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMes
sagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:33:00.170354801Z","io.kubernetes.cri-o.Image":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.ImageName":"ghcr.io/kube-vip/kube-vip:v1.0.0","io.kubernetes.cri-o.ImageRef":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-vip\",\"io.kubernetes.pod.name\":\"kube-vip-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a7817082b8b3b4ebaac6b1c6cc40fe3e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-vip-ha-671025_a7817082b8b3b4ebaac6b1c6cc40fe3e/kube-vip/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-vip\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/
storage/overlay/d1bbef73ef376ea943ccf80c23fb8fd4556f886e52e63a59db0627508fb2430b/merged","io.kubernetes.cri-o.Name":"k8s_kube-vip_kube-vip-ha-671025_kube-system_a7817082b8b3b4ebaac6b1c6cc40fe3e_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/aca3020b8c9d03c59812f32aa02323ace09e6b9784e7f9b6eae4976a3eab2f1d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"aca3020b8c9d03c59812f32aa02323ace09e6b9784e7f9b6eae4976a3eab2f1d","io.kubernetes.cri-o.SandboxName":"k8s_kube-vip-ha-671025_kube-system_a7817082b8b3b4ebaac6b1c6cc40fe3e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a7817082b8b3b4ebaac6b1c6cc40fe3e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a781708
2b8b3b4ebaac6b1c6cc40fe3e/containers/kube-vip/367d19bd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/admin.conf\",\"host_path\":\"/etc/kubernetes/admin.conf\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-vip-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a7817082b8b3b4ebaac6b1c6cc40fe3e","kubernetes.io/config.hash":"a7817082b8b3b4ebaac6b1c6cc40fe3e","kubernetes.io/config.seen":"2025-09-17T00:32:59.669171997Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da","pid":880,"status":"running","bundle":"/run/containers/
storage/overlay-containers/3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da/userdata","rootfs":"/var/lib/containers/storage/overlay/9b7a3dc090f584f6e4f5509cd9284edde85ace5b420fc8c9f6eae4139c98d2aa/merged","created":"2025-09-17T00:33:00.275833142Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d671eaa0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d671eaa0\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8443,\\\"containerPort\\\":8443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePa
th\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:33:00.202504428Z","io.kubernetes.cri-o.Image":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri-o.ImageRef":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b5ccb738eb1160dc60c2973028d04964\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-ha-671025_b5ccb738eb1160dc60c2973028d04964/kube-apiserver/1.log","io.kuberne
tes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9b7a3dc090f584f6e4f5509cd9284edde85ace5b420fc8c9f6eae4139c98d2aa/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-ha-671025_kube-system_b5ccb738eb1160dc60c2973028d04964_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c0bb4371ed6c8742b2ad9f89d7b5b46fbc83b2b33c92890300a7de93cb2ebbb6/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c0bb4371ed6c8742b2ad9f89d7b5b46fbc83b2b33c92890300a7de93cb2ebbb6","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-ha-671025_kube-system_b5ccb738eb1160dc60c2973028d04964_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b5ccb738eb1160dc60c2973028d04964/containers/kube-ap
iserver/6df491f2\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b5ccb738eb1160dc60c2973028d04964/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":f
alse}]","io.kubernetes.pod.name":"kube-apiserver-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b5ccb738eb1160dc60c2973028d04964","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"b5ccb738eb1160dc60c2973028d04964","kubernetes.io/config.seen":"2025-09-17T00:32:59.669167256Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3","pid":894,"status":"running","bundle":"/run/containers/storage/overlay-containers/c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3/userdata","rootfs":"/var/lib/containers/storage/overlay/064810f36ba8359e1cc403cdd3631d6973a
9bffec85a2a35b5e8e008790d2da1/merged","created":"2025-09-17T00:33:00.274952825Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"85eae708","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"85eae708\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID"
:"c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:33:00.203434002Z","io.kubernetes.cri-o.Image":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri-o.ImageRef":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"74a9cbd6392d4b9acfdd053de2761cb8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-ha-671025_74a9cbd6392d4b9acfdd053de2761cb8/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/064810f36ba8359e1cc403cdd3631d6973a9bffec85
a2a35b5e8e008790d2da1/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-ha-671025_kube-system_74a9cbd6392d4b9acfdd053de2761cb8_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0d6a7ac1856cbec973e10d8124dc32d2336942aefec9e4e328bba1938afb798a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0d6a7ac1856cbec973e10d8124dc32d2336942aefec9e4e328bba1938afb798a","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-ha-671025_kube-system_74a9cbd6392d4b9acfdd053de2761cb8_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/74a9cbd6392d4b9acfdd053de2761cb8/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/74a9cbd6392d4b9acfdd053de2761cb8/containers/kube
-scheduler/513703c7\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"74a9cbd6392d4b9acfdd053de2761cb8","kubernetes.io/config.hash":"74a9cbd6392d4b9acfdd053de2761cb8","kubernetes.io/config.seen":"2025-09-17T00:32:59.669170685Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c","pid":914,"status":"running","bundle":"/run/containers/storage/overlay-contai
ners/dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c/userdata","rootfs":"/var/lib/containers/storage/overlay/7b172e441c6d71eaa8c8337753bce771b451d1d95369d9d84519996303a3c5c0/merged","created":"2025-09-17T00:33:00.286793858Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7eaa1830","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7eaa1830\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/d
ev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:33:00.204654096Z","io.kubernetes.cri-o.Image":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri-o.ImageRef":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8d1e0f98935496199c8e8278a2410d09\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-ha-671025_8d1e0f98935496199c8e8278a2410d09/kube-c
ontroller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7b172e441c6d71eaa8c8337753bce771b451d1d95369d9d84519996303a3c5c0/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-ha-671025_kube-system_8d1e0f98935496199c8e8278a2410d09_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/17b3a59f2d7b6e908cfd321a66c6b87feb6fb4fe0c647bb872c8981c7768653d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"17b3a59f2d7b6e908cfd321a66c6b87feb6fb4fe0c647bb872c8981c7768653d","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-ha-671025_kube-system_8d1e0f98935496199c8e8278a2410d09_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/
etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8d1e0f98935496199c8e8278a2410d09/containers/kube-controller-manager/7587fc8c\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8d1e0f98935496199c8e8278a2410d09/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"ho
st_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8d1e0f98935496199c8e8278a2410d09","kubernetes.io/config.hash":"8d1e0f98935496199c8e8278a2410d09","kubernetes.io/config.seen":"2025-09-17T00:32:59.669169006Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.system
d.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15","pid":809,"status":"running","bundle":"/run/containers/storage/overlay-containers/feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15/userdata","rootfs":"/var/lib/containers/storage/overlay/0de8b6318aa0eefff40d78b1a2eccd71a123a2f8a8081d228455fb7b3b8e91aa/merged","created":"2025-09-17T00:33:00.227524758Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9e20c65","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9e20c65\",\"io.kubernetes.container.ports\":\"[{\
\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:33:00.156861142Z","io.kubernetes.cri-o.Image":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri-o.ImageRef":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"629bf94aa
8286a4aae957269fae7c79b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-ha-671025_629bf94aa8286a4aae957269fae7c79b/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0de8b6318aa0eefff40d78b1a2eccd71a123a2f8a8081d228455fb7b3b8e91aa/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-ha-671025_kube-system_629bf94aa8286a4aae957269fae7c79b_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ff786868f6409aa327dcae8a4aa518d72def9dcd14446677c7ba027c7a4a57b9/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ff786868f6409aa327dcae8a4aa518d72def9dcd14446677c7ba027c7a4a57b9","io.kubernetes.cri-o.SandboxName":"k8s_etcd-ha-671025_kube-system_629bf94aa8286a4aae957269fae7c79b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\
":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/629bf94aa8286a4aae957269fae7c79b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/629bf94aa8286a4aae957269fae7c79b/containers/etcd/188c438f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"629bf94aa8286a4aae957269fae7c79b","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"629bf94aa8286a4aae957269fae7c79b",
"kubernetes.io/config.seen":"2025-09-17T00:32:59.669161890Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0917 00:33:00.733792  619438 cri.go:126] list returned 5 containers
	I0917 00:33:00.733811  619438 cri.go:129] container: {ID:3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49 Status:running}
	I0917 00:33:00.733830  619438 cri.go:135] skipping {3a99a51aacd42b76c5480eccf1b466f783f7987fa530f44abc1aa4a8e2b09c49 running}: state = "running", want "paused"
	I0917 00:33:00.733846  619438 cri.go:129] container: {ID:3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da Status:running}
	I0917 00:33:00.733857  619438 cri.go:135] skipping {3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da running}: state = "running", want "paused"
	I0917 00:33:00.733867  619438 cri.go:129] container: {ID:c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3 Status:running}
	I0917 00:33:00.733875  619438 cri.go:135] skipping {c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3 running}: state = "running", want "paused"
	I0917 00:33:00.733884  619438 cri.go:129] container: {ID:dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c Status:running}
	I0917 00:33:00.733891  619438 cri.go:135] skipping {dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c running}: state = "running", want "paused"
	I0917 00:33:00.733906  619438 cri.go:129] container: {ID:feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15 Status:running}
	I0917 00:33:00.733915  619438 cri.go:135] skipping {feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15 running}: state = "running", want "paused"
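	(cri.go is filtering the `sudo runc list -f json` dump above for paused containers and finds none — all five are running. A hypothetical jq one-liner with the same effect, assuming jq is available on the node:
	  sudo runc list -f json | jq -r '.[] | select(.status == "paused") | .id'   # empty output = nothing paused
	)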
	I0917 00:33:00.733967  619438 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:33:00.743818  619438 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:33:00.743842  619438 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:33:00.743896  619438 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:33:00.753049  619438 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:33:00.753478  619438 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-671025" does not appear in /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:33:00.753570  619438 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-517646/kubeconfig needs updating (will repair): [kubeconfig missing "ha-671025" cluster setting kubeconfig missing "ha-671025" context setting]
	I0917 00:33:00.753860  619438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
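	(A hedged manual equivalent of the kubeconfig repair logged above, using the server URL and CA path from this run:
	  KUBECONFIG_FILE=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	  kubectl config --kubeconfig="$KUBECONFIG_FILE" set-cluster ha-671025 \
	    --server=https://192.168.49.2:8443 \
	    --certificate-authority=/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt
	  kubectl config --kubeconfig="$KUBECONFIG_FILE" set-context ha-671025 \
	    --cluster=ha-671025 --user=ha-671025
	)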
	I0917 00:33:00.754368  619438 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:33:00.754887  619438 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:33:00.754902  619438 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:33:00.754906  619438 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:33:00.754911  619438 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:33:00.754914  619438 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:33:00.754984  619438 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:33:00.755286  619438 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:33:00.764691  619438 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0917 00:33:00.764721  619438 kubeadm.go:593] duration metric: took 20.872209ms to restartPrimaryControlPlane
	I0917 00:33:00.764732  619438 kubeadm.go:394] duration metric: took 94.344936ms to StartCluster
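	(The "does not require reconfiguration" verdict above rests on diff's exit status — identical files exit 0. A sketch:
	  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	    && echo "configs identical: no kubeadm reconfiguration needed"
	)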
	I0917 00:33:00.764754  619438 settings.go:142] acquiring lock: {Name:mk3b4e5824fb8718eece00dc70a9d05f0af2a028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:00.764829  619438 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:33:00.765434  619438 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:00.765678  619438 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:33:00.765703  619438 start.go:241] waiting for startup goroutines ...
	I0917 00:33:00.765712  619438 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:33:00.765954  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:00.768475  619438 out.go:179] * Enabled addons: 
	I0917 00:33:00.769396  619438 addons.go:514] duration metric: took 3.672053ms for enable addons: enabled=[]
	I0917 00:33:00.769427  619438 start.go:246] waiting for cluster config update ...
	I0917 00:33:00.769435  619438 start.go:255] writing updated cluster config ...
	I0917 00:33:00.770640  619438 out.go:203] 
	I0917 00:33:00.771782  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:00.771882  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:00.773295  619438 out.go:179] * Starting "ha-671025-m02" control-plane node in "ha-671025" cluster
	I0917 00:33:00.774266  619438 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:33:00.775272  619438 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:33:00.776246  619438 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:33:00.776270  619438 cache.go:58] Caching tarball of preloaded images
	I0917 00:33:00.776303  619438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:33:00.776369  619438 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:33:00.776383  619438 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:33:00.776522  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:00.798181  619438 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:33:00.798201  619438 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:33:00.798221  619438 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:33:00.798259  619438 start.go:360] acquireMachinesLock for ha-671025-m02: {Name:mk1465985964f60af81adbf10dbe0a21c7eb20d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:33:00.798335  619438 start.go:364] duration metric: took 52.828µs to acquireMachinesLock for "ha-671025-m02"
	I0917 00:33:00.798366  619438 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:33:00.798404  619438 fix.go:54] fixHost starting: m02
	I0917 00:33:00.798630  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:33:00.816952  619438 fix.go:112] recreateIfNeeded on ha-671025-m02: state=Stopped err=<nil>
	W0917 00:33:00.816988  619438 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:33:00.818588  619438 out.go:252] * Restarting existing docker container for "ha-671025-m02" ...
	I0917 00:33:00.818663  619438 cli_runner.go:164] Run: docker start ha-671025-m02
	I0917 00:33:01.089289  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:33:01.112171  619438 kic.go:430] container "ha-671025-m02" state is running.
	I0917 00:33:01.112607  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:33:01.134692  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:01.134992  619438 machine.go:93] provisionDockerMachine start ...
	I0917 00:33:01.135064  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:01.156210  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:01.156564  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0917 00:33:01.156582  619438 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:33:01.157427  619438 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34164->127.0.0.1:33183: read: connection reset by peer
	I0917 00:33:04.296769  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m02
	
	I0917 00:33:04.296809  619438 ubuntu.go:182] provisioning hostname "ha-671025-m02"
	I0917 00:33:04.296905  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:04.315073  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:04.315310  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0917 00:33:04.315323  619438 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m02 && echo "ha-671025-m02" | sudo tee /etc/hostname
	I0917 00:33:04.466025  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m02
	
	I0917 00:33:04.466110  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:04.484268  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:04.484535  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0917 00:33:04.484554  619438 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
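	(A hypothetical follow-up check — not run by the test — that the 127.0.1.1 mapping the script above maintains is in place:
	  grep -n '^127.0.1.1' /etc/hosts   # expect: 127.0.1.1 ha-671025-m02
	  getent hosts ha-671025-m02        # resolves through /etc/hosts
	)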
	I0917 00:33:04.621439  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:33:04.621482  619438 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:33:04.621501  619438 ubuntu.go:190] setting up certificates
	I0917 00:33:04.621511  619438 provision.go:84] configureAuth start
	I0917 00:33:04.621573  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:33:04.640283  619438 provision.go:143] copyHostCerts
	I0917 00:33:04.640335  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:33:04.640368  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:33:04.640383  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:33:04.640480  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:33:04.640601  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:33:04.640634  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:33:04.640652  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:33:04.640698  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:33:04.640784  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:33:04.640809  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:33:04.640818  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:33:04.640852  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:33:04.640942  619438 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m02 san=[127.0.0.1 192.168.49.3 ha-671025-m02 localhost minikube]
	I0917 00:33:04.733693  619438 provision.go:177] copyRemoteCerts
	I0917 00:33:04.733759  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:33:04.733809  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:04.752499  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:33:04.850462  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:33:04.850518  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:33:04.876387  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:33:04.876625  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:33:04.904017  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:33:04.904091  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:33:04.932067  619438 provision.go:87] duration metric: took 310.54132ms to configureAuth
	I0917 00:33:04.932114  619438 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:33:04.932333  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:04.932519  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:04.950911  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:04.951173  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0917 00:33:04.951192  619438 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:33:13.583717  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:33:13.583742  619438 machine.go:96] duration metric: took 12.448736712s to provisionDockerMachine
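	(The /etc/sysconfig/crio.minikube drop-in written over SSH above only takes effect if crio.service sources it; the assumption here — not shown in the log — is an EnvironmentFile= line expanding $CRIO_MINIKUBE_OPTIONS on ExecStart. A hedged verification:
	  systemctl cat crio | grep -n 'CRIO_MINIKUBE_OPTIONS'   # confirm the unit references the drop-in
	  cat /etc/sysconfig/crio.minikube                       # the flags written above
	)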
	I0917 00:33:13.583754  619438 start.go:293] postStartSetup for "ha-671025-m02" (driver="docker")
	I0917 00:33:13.583768  619438 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:33:13.583844  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:33:13.583889  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:13.602374  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:33:13.704271  619438 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:33:13.709862  619438 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:33:13.709910  619438 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:33:13.709921  619438 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:33:13.709930  619438 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:33:13.709945  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:33:13.710027  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:33:13.710128  619438 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:33:13.710138  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:33:13.710258  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:33:13.726542  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:33:13.762021  619438 start.go:296] duration metric: took 178.248287ms for postStartSetup
	I0917 00:33:13.762146  619438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:33:13.762202  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:13.785807  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:33:13.885926  619438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:33:13.890781  619438 fix.go:56] duration metric: took 13.092394555s for fixHost
	I0917 00:33:13.890814  619438 start.go:83] releasing machines lock for "ha-671025-m02", held for 13.092464098s
	I0917 00:33:13.890888  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:33:13.912194  619438 out.go:179] * Found network options:
	I0917 00:33:13.913617  619438 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:33:13.914820  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:33:13.914864  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:33:13.914934  619438 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:33:13.914975  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:13.915050  619438 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:33:13.915121  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:33:13.935804  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:33:13.936030  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:33:14.188511  619438 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:33:14.195453  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:33:14.211117  619438 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:33:14.211201  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:33:14.227642  619438 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
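Note that minikube disables CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, so the step is reversible and idempotent (-not -name *.mk_disabled skips files disabled on a previous run). On this node only the loopback config matched; a directory listing after the step might look like this (filenames illustrative, not taken from this log):

    $ ls /etc/cni/net.d
    200-loopback.conf.mk_disabled  10-kindnet.conflist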
	I0917 00:33:14.227708  619438 start.go:495] detecting cgroup driver to use...
	I0917 00:33:14.227849  619438 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:33:14.227922  619438 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:33:14.251293  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:33:14.271238  619438 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:33:14.271313  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:33:14.288904  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:33:14.307961  619438 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:33:14.437900  619438 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:33:14.545190  619438 docker.go:234] disabling docker service ...
	I0917 00:33:14.545281  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:33:14.560872  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:33:14.573584  619438 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:33:14.680197  619438 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:33:14.811100  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:33:14.825885  619438 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:33:14.847059  619438 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:33:14.847127  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.859808  619438 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:33:14.859899  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.871797  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.883328  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.896664  619438 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:33:14.907675  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.918906  619438 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.929358  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:14.941273  619438 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:33:14.953043  619438 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:33:14.967648  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:15.083218  619438 ssh_runner.go:195] Run: sudo systemctl restart crio
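Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following contents. The TOML section headers are assumed (the sed patterns match the keys wherever they sit in the file); the key/value pairs are exactly those written by the commands in this log:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]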
	I0917 00:33:21.777437  619438 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.694178293s)
	I0917 00:33:21.777485  619438 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:33:21.777539  619438 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:33:21.781615  619438 start.go:563] Will wait 60s for crictl version
	I0917 00:33:21.781681  619438 ssh_runner.go:195] Run: which crictl
	I0917 00:33:21.785837  619438 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:33:21.828119  619438 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:33:21.828217  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:33:21.874252  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:33:21.916319  619438 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:33:21.917788  619438 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:33:21.918929  619438 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:33:21.938354  619438 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:33:21.942655  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
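The /etc/hosts update deliberately uses a filter-then-cp pattern instead of sed -i: inside a container /etc/hosts is a bind mount, so the file must be rewritten in place rather than replaced with a new inode (which is what sed -i's rename does, and which fails on a mount point). The same idiom with a hypothetical entry:

    # drop any stale line for the name, append the fresh mapping, copy back in place
    { grep -v $'\thost.example.internal$' /etc/hosts; \
      echo $'192.0.2.1\thost.example.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$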
	I0917 00:33:21.956120  619438 mustload.go:65] Loading cluster: ha-671025
	I0917 00:33:21.956460  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:21.956800  619438 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:33:21.976493  619438 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:33:21.976752  619438 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.3
	I0917 00:33:21.976765  619438 certs.go:194] generating shared ca certs ...
	I0917 00:33:21.976779  619438 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:21.976919  619438 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:33:21.976970  619438 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:33:21.976980  619438 certs.go:256] generating profile certs ...
	I0917 00:33:21.977105  619438 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:33:21.977160  619438 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.289f7349
	I0917 00:33:21.977201  619438 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:33:21.977214  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:33:21.977226  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:33:21.977238  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:33:21.977248  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:33:21.977263  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:33:21.977277  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:33:21.977292  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:33:21.977304  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:33:21.977348  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:33:21.977374  619438 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:33:21.977384  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:33:21.977437  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:33:21.977468  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:33:21.977488  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:33:21.977537  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:33:21.977566  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:33:21.977579  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:21.977591  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:33:21.977641  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:33:21.996033  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:33:22.086756  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:33:22.091430  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:33:22.105578  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:33:22.109474  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 00:33:22.123413  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:33:22.127015  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:33:22.140675  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:33:22.145374  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 00:33:22.160202  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:33:22.164648  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:33:22.179040  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:33:22.182820  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:33:22.197252  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:33:22.226621  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:33:22.255420  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:33:22.284497  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:33:22.313100  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:33:22.339570  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:33:22.368270  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:33:22.395836  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:33:22.424911  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:33:22.451321  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:33:22.479698  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:33:22.509017  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:33:22.530192  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 00:33:22.550277  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:33:22.570982  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 00:33:22.591763  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:33:22.615610  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:33:22.637548  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:33:22.660728  619438 ssh_runner.go:195] Run: openssl version
	I0917 00:33:22.668525  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:33:22.679921  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:33:22.684865  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:33:22.684929  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:33:22.692513  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:33:22.703651  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:33:22.716758  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:22.721573  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:22.721639  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:22.729408  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:33:22.740799  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:33:22.754481  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:33:22.759515  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:33:22.759591  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:33:22.769873  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:33:22.780940  619438 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:33:22.785123  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:33:22.792739  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:33:22.800305  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:33:22.808094  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:33:22.815985  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:33:22.823772  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
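Each -checkend 86400 probe asks whether the certificate will still be valid 24 hours from now: openssl exits 0 if it will not have expired by then and 1 if it will, so minikube only regenerates certs that are about to lapse. The same check scripts naturally:

    # prints "Certificate will expire" / "will not expire" and sets the exit status accordingly
    if ! openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "etcd server cert expires within 24h, regenerating" >&2
    fi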
	I0917 00:33:22.830968  619438 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 crio true true} ...
	I0917 00:33:22.831108  619438 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
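The doubled ExecStart in the rendered unit is the standard systemd override idiom: a drop-in must first clear the base unit's ExecStart with an empty assignment before it may set its own, since a plain service type accepts only one command. The drop-in scp'd below is presumably 10-kubeadm.conf, matching the kubeadm convention, and has the shape:

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --hostname-override=ha-671025-m02 --node-ip=192.168.49.3 ...  # full flag list as logged above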
	I0917 00:33:22.831135  619438 kube-vip.go:115] generating kube-vip config ...
	I0917 00:33:22.831174  619438 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:33:22.845445  619438 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:33:22.845549  619438 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
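This manifest lands in /etc/kubernetes/manifests (see the kube-vip.yaml scp a few lines below), so the kubelet runs kube-vip as a static pod on each control-plane node. With cp_enable and vip_leaderelection set, the lease holder claims 192.168.49.254 as a secondary address on eth0 and answers for it via ARP; because the ip_vs modules were unavailable, the config carries no load-balancing entry and only VIP failover is active. Two illustrative ways to confirm on a live node:

    # the VIP shows up on the current leader only
    ip addr show eth0 | grep 192.168.49.254
    # static pods appear in the API as node-suffixed mirror pods
    kubectl -n kube-system get pods | grep kube-vip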
	I0917 00:33:22.845617  619438 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:33:22.856831  619438 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:33:22.856928  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:33:22.867889  619438 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 00:33:22.888469  619438 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:33:22.908498  619438 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:33:22.929249  619438 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:33:22.933575  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:33:22.945785  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:23.049186  619438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:33:23.063035  619438 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:33:23.063337  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:23.065109  619438 out.go:179] * Verifying Kubernetes components...
	I0917 00:33:23.066721  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:23.162455  619438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:33:23.176145  619438 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:33:23.176215  619438 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:33:23.176479  619438 node_ready.go:35] waiting up to 6m0s for node "ha-671025-m02" to be "Ready" ...
	I0917 00:33:23.185303  619438 node_ready.go:49] node "ha-671025-m02" is "Ready"
	I0917 00:33:23.185333  619438 node_ready.go:38] duration metric: took 8.819618ms for node "ha-671025-m02" to be "Ready" ...
	I0917 00:33:23.185350  619438 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:33:23.185420  619438 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:33:23.197637  619438 api_server.go:72] duration metric: took 134.535244ms to wait for apiserver process to appear ...
	I0917 00:33:23.197672  619438 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:33:23.197693  619438 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:33:23.202879  619438 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:33:23.204114  619438 api_server.go:141] control plane version: v1.34.0
	I0917 00:33:23.204224  619438 api_server.go:131] duration metric: took 6.534103ms to wait for apiserver health ...
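The healthz probe is a plain HTTPS GET against the API server; reproduced by hand (-k because this probe, like minikube's, does not pin the cluster CA):

    $ curl -sk https://192.168.49.2:8443/healthz
    ok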
	I0917 00:33:23.204244  619438 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:33:23.211681  619438 system_pods.go:59] 24 kube-system pods found
	I0917 00:33:23.211742  619438 system_pods.go:61] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:23.211758  619438 system_pods.go:61] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:23.211769  619438 system_pods.go:61] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:33:23.211777  619438 system_pods.go:61] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:33:23.211783  619438 system_pods.go:61] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running
	I0917 00:33:23.211792  619438 system_pods.go:61] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 00:33:23.211798  619438 system_pods.go:61] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:33:23.211807  619438 system_pods.go:61] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:33:23.211816  619438 system_pods.go:61] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:33:23.211822  619438 system_pods.go:61] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:33:23.211829  619438 system_pods.go:61] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running
	I0917 00:33:23.211836  619438 system_pods.go:61] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:33:23.211844  619438 system_pods.go:61] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:33:23.211850  619438 system_pods.go:61] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running
	I0917 00:33:23.211859  619438 system_pods.go:61] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:33:23.211867  619438 system_pods.go:61] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:33:23.211875  619438 system_pods.go:61] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:33:23.211881  619438 system_pods.go:61] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:33:23.211888  619438 system_pods.go:61] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:33:23.211896  619438 system_pods.go:61] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running
	I0917 00:33:23.211902  619438 system_pods.go:61] "kube-vip-ha-671025" [bcb7c84b-932c-463e-a710-1d665741e70a] Running
	I0917 00:33:23.211907  619438 system_pods.go:61] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:33:23.211913  619438 system_pods.go:61] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:33:23.211919  619438 system_pods.go:61] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:33:23.211928  619438 system_pods.go:74] duration metric: took 7.670911ms to wait for pod list to return data ...
	I0917 00:33:23.211942  619438 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:33:23.215282  619438 default_sa.go:45] found service account: "default"
	I0917 00:33:23.215305  619438 default_sa.go:55] duration metric: took 3.354164ms for default service account to be created ...
	I0917 00:33:23.215314  619438 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:33:23.220686  619438 system_pods.go:86] 24 kube-system pods found
	I0917 00:33:23.220721  619438 system_pods.go:89] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:23.220730  619438 system_pods.go:89] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:23.220737  619438 system_pods.go:89] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:33:23.220741  619438 system_pods.go:89] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:33:23.220745  619438 system_pods.go:89] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running
	I0917 00:33:23.220750  619438 system_pods.go:89] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0917 00:33:23.220753  619438 system_pods.go:89] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:33:23.220759  619438 system_pods.go:89] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:33:23.220763  619438 system_pods.go:89] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:33:23.220768  619438 system_pods.go:89] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:33:23.220771  619438 system_pods.go:89] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running
	I0917 00:33:23.220774  619438 system_pods.go:89] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:33:23.220778  619438 system_pods.go:89] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:33:23.220782  619438 system_pods.go:89] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running
	I0917 00:33:23.220786  619438 system_pods.go:89] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 00:33:23.220790  619438 system_pods.go:89] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:33:23.220793  619438 system_pods.go:89] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:33:23.220796  619438 system_pods.go:89] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:33:23.220800  619438 system_pods.go:89] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:33:23.220803  619438 system_pods.go:89] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running
	I0917 00:33:23.220806  619438 system_pods.go:89] "kube-vip-ha-671025" [bcb7c84b-932c-463e-a710-1d665741e70a] Running
	I0917 00:33:23.220808  619438 system_pods.go:89] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:33:23.220812  619438 system_pods.go:89] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:33:23.220816  619438 system_pods.go:89] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:33:23.220822  619438 system_pods.go:126] duration metric: took 5.503704ms to wait for k8s-apps to be running ...
	I0917 00:33:23.220830  619438 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:33:23.220878  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:33:23.233344  619438 system_svc.go:56] duration metric: took 12.501522ms WaitForService to wait for kubelet
	I0917 00:33:23.233378  619438 kubeadm.go:578] duration metric: took 170.282ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:33:23.233426  619438 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:33:23.237203  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:23.237235  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:23.237249  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:23.237253  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:23.237258  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:23.237263  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:23.237268  619438 node_conditions.go:105] duration metric: took 3.836923ms to run NodePressure ...
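Three capacity pairs, one per node of the HA cluster. The same figures can be read straight from the API; the output below is reconstructed from the values logged above:

    $ kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage
    NAME            CPU   EPHEMERAL
    ha-671025       8     304681132Ki
    ha-671025-m02   8     304681132Ki
    ha-671025-m03   8     304681132Ki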
	I0917 00:33:23.237281  619438 start.go:241] waiting for startup goroutines ...
	I0917 00:33:23.237310  619438 start.go:255] writing updated cluster config ...
	I0917 00:33:23.239362  619438 out.go:203] 
	I0917 00:33:23.240662  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:23.240787  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:23.242255  619438 out.go:179] * Starting "ha-671025-m03" control-plane node in "ha-671025" cluster
	I0917 00:33:23.243650  619438 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:33:23.244785  619438 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:33:23.245985  619438 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:33:23.246015  619438 cache.go:58] Caching tarball of preloaded images
	I0917 00:33:23.246076  619438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:33:23.246103  619438 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:33:23.246111  619438 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:33:23.246237  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:23.267677  619438 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:33:23.267698  619438 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:33:23.267719  619438 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:33:23.267746  619438 start.go:360] acquireMachinesLock for ha-671025-m03: {Name:mk60ae20c28e89b2af34eaf4825fcf2e756b9f82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:33:23.267801  619438 start.go:364] duration metric: took 38.266µs to acquireMachinesLock for "ha-671025-m03"
	I0917 00:33:23.267818  619438 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:33:23.267825  619438 fix.go:54] fixHost starting: m03
	I0917 00:33:23.268049  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:33:23.286470  619438 fix.go:112] recreateIfNeeded on ha-671025-m03: state=Stopped err=<nil>
	W0917 00:33:23.286501  619438 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:33:23.288337  619438 out.go:252] * Restarting existing docker container for "ha-671025-m03" ...
	I0917 00:33:23.288444  619438 cli_runner.go:164] Run: docker start ha-671025-m03
	I0917 00:33:23.539232  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m03 --format={{.State.Status}}
	I0917 00:33:23.559852  619438 kic.go:430] container "ha-671025-m03" state is running.
	I0917 00:33:23.560281  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:33:23.582181  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:23.582448  619438 machine.go:93] provisionDockerMachine start ...
	I0917 00:33:23.582512  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:23.603240  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:23.603508  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0917 00:33:23.603524  619438 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:33:23.604268  619438 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54628->127.0.0.1:33188: read: connection reset by peer
	I0917 00:33:26.756053  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m03
	
	I0917 00:33:26.756095  619438 ubuntu.go:182] provisioning hostname "ha-671025-m03"
	I0917 00:33:26.756163  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:26.775553  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:26.775816  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0917 00:33:26.775832  619438 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m03 && echo "ha-671025-m03" | sudo tee /etc/hostname
	I0917 00:33:26.929724  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m03
	
	I0917 00:33:26.929811  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:26.948952  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:26.949181  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0917 00:33:26.949199  619438 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:33:27.097686  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:33:27.097724  619438 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:33:27.097808  619438 ubuntu.go:190] setting up certificates
	I0917 00:33:27.097838  619438 provision.go:84] configureAuth start
	I0917 00:33:27.097905  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:33:27.124607  619438 provision.go:143] copyHostCerts
	I0917 00:33:27.124661  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:33:27.124704  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:33:27.124712  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:33:27.124796  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:33:27.124902  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:33:27.124927  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:33:27.124938  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:33:27.124978  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:33:27.125071  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:33:27.125093  619438 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:33:27.125097  619438 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:33:27.125123  619438 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:33:27.125202  619438 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m03 san=[127.0.0.1 192.168.49.4 ha-671025-m03 localhost minikube]
	I0917 00:33:27.491028  619438 provision.go:177] copyRemoteCerts
	I0917 00:33:27.491103  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:33:27.491153  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:27.510894  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:33:27.621913  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:33:27.621991  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:33:27.659332  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:33:27.659436  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:33:27.694265  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:33:27.694331  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
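The server cert generated for m03 above carries SANs for every name the machine may be dialed by (127.0.0.1, 192.168.49.4, ha-671025-m03, localhost, minikube). To verify them on the node once the copy has landed:

    sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'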
	I0917 00:33:27.729012  619438 provision.go:87] duration metric: took 631.150589ms to configureAuth
	I0917 00:33:27.729044  619438 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:33:27.729332  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:27.729498  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:27.752375  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:27.752667  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0917 00:33:27.752694  619438 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:33:28.163571  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:33:28.163606  619438 machine.go:96] duration metric: took 4.581141061s to provisionDockerMachine
	I0917 00:33:28.163625  619438 start.go:293] postStartSetup for "ha-671025-m03" (driver="docker")
	I0917 00:33:28.163636  619438 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:33:28.163694  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:33:28.163736  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:28.183221  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:33:28.282370  619438 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:33:28.286033  619438 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:33:28.286069  619438 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:33:28.286080  619438 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:33:28.286089  619438 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:33:28.286103  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:33:28.286167  619438 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:33:28.286260  619438 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:33:28.286273  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:33:28.286385  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:33:28.296210  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:33:28.323607  619438 start.go:296] duration metric: took 159.96344ms for postStartSetup
	I0917 00:33:28.323744  619438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:33:28.323801  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:28.341948  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:33:28.437100  619438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:33:28.442217  619438 fix.go:56] duration metric: took 5.174381535s for fixHost
	I0917 00:33:28.442251  619438 start.go:83] releasing machines lock for "ha-671025-m03", held for 5.17444003s
	I0917 00:33:28.442339  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m03
	I0917 00:33:28.462490  619438 out.go:179] * Found network options:
	I0917 00:33:28.463995  619438 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0917 00:33:28.465339  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:33:28.465379  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:33:28.465437  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:33:28.465456  619438 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:33:28.465540  619438 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:33:28.465604  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:28.465608  619438 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:33:28.465666  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m03
	I0917 00:33:28.484618  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:33:28.484954  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m03/id_rsa Username:docker}
	I0917 00:33:28.729938  619438 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:33:28.735367  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:33:28.746253  619438 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:33:28.746345  619438 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:33:28.757317  619438 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:33:28.757344  619438 start.go:495] detecting cgroup driver to use...
	I0917 00:33:28.757382  619438 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:33:28.757457  619438 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:33:28.772308  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:33:28.784900  619438 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:33:28.784967  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:33:28.800003  619438 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:33:28.812730  619438 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:33:28.927855  619438 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:33:29.059441  619438 docker.go:234] disabling docker service ...
	I0917 00:33:29.059519  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:33:29.078537  619438 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:33:29.093278  619438 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:33:29.210953  619438 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:33:29.324946  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:33:29.337107  619438 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:33:29.355136  619438 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:33:29.355186  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.366142  619438 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:33:29.366211  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.378355  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.389105  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.399699  619438 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:33:29.409712  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.420697  619438 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:33:29.430508  619438 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
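The config edits above amount to the following CRI-O drop-in (a sketch assuming an otherwise default /etc/crio/crio.conf.d/02-crio.conf; the [crio.image]/[crio.runtime] table placement is standard CRI-O layout and is not itself shown in the log):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]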
	I0917 00:33:29.440921  619438 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:33:29.450466  619438 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:33:29.459577  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:29.574875  619438 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 00:33:29.816990  619438 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:33:29.817095  619438 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:33:29.821723  619438 start.go:563] Will wait 60s for crictl version
	I0917 00:33:29.821780  619438 ssh_runner.go:195] Run: which crictl
	I0917 00:33:29.825613  619438 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:33:29.861449  619438 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:33:29.861530  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:33:29.917974  619438 ssh_runner.go:195] Run: crio --version
	I0917 00:33:29.959407  619438 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:33:29.960768  619438 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:33:29.962037  619438 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0917 00:33:29.963347  619438 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:33:29.990529  619438 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:33:29.995062  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
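Note the write-to-temp-then-copy idiom in the hosts update: a plain `sudo ... > /etc/hosts` redirection would be performed by the unprivileged shell rather than by root, so the new file is assembled under /tmp and installed with `sudo cp`. Filtering out any existing `host.minikube.internal` entry first also makes the edit idempotent across restarts.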
	I0917 00:33:30.007594  619438 mustload.go:65] Loading cluster: ha-671025
	I0917 00:33:30.007810  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:30.008007  619438 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:33:30.028172  619438 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:33:30.028488  619438 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.4
	I0917 00:33:30.028502  619438 certs.go:194] generating shared ca certs ...
	I0917 00:33:30.028518  619438 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:33:30.028667  619438 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:33:30.028724  619438 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:33:30.028738  619438 certs.go:256] generating profile certs ...
	I0917 00:33:30.028835  619438 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:33:30.028918  619438 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.bb6f0fe7
	I0917 00:33:30.028969  619438 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:33:30.028985  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:33:30.029006  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:33:30.029022  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:33:30.029039  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:33:30.029053  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:33:30.029066  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:33:30.029085  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:33:30.029109  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:33:30.029181  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:33:30.029228  619438 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:33:30.029241  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:33:30.029285  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:33:30.029320  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:33:30.029350  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:33:30.029418  619438 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:33:30.029458  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:30.029480  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:33:30.029497  619438 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:33:30.029570  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:33:30.048859  619438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:33:30.137756  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:33:30.142385  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:33:30.157058  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:33:30.161473  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 00:33:30.176759  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:33:30.180509  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:33:30.193674  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:33:30.197197  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 00:33:30.210232  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:33:30.214138  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:33:30.227500  619438 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:33:30.231351  619438 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:33:30.244274  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:33:30.271911  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:33:30.299112  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:33:30.326476  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:33:30.352993  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0917 00:33:30.380621  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 00:33:30.406324  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:33:30.432139  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:33:30.458308  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:33:30.483817  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:33:30.509827  619438 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:33:30.537659  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:33:30.557593  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 00:33:30.577579  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:33:30.597023  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 00:33:30.617353  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:33:30.636531  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:33:30.656268  619438 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:33:30.676462  619438 ssh_runner.go:195] Run: openssl version
	I0917 00:33:30.682486  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:33:30.693023  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:30.696932  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:30.696986  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:33:30.704184  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:33:30.714256  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:33:30.725254  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:33:30.728941  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:33:30.729013  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:33:30.736673  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:33:30.746358  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:33:30.757231  619438 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:33:30.761269  619438 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:33:30.761351  619438 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:33:30.768689  619438 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
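The `.0` names being linked here (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash lookups: TLS clients locate a CA by hashing its subject and opening /etc/ssl/certs/<hash>.0, which is why each install step pairs an `openssl x509 -hash` call with an `ln -fs`. Verifying one by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    readlink /etc/ssl/certs/b5213941.0                                        # -> /etc/ssl/certs/minikubeCA.pem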
	I0917 00:33:30.779054  619438 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:33:30.783069  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:33:30.790436  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:33:30.797491  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:33:30.804684  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:33:30.811602  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:33:30.818603  619438 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
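`-checkend 86400` asks OpenSSL whether each certificate will still be valid 86400 seconds (24 hours) from now, exiting non-zero if it would expire inside that window; a clean pass through all six checks means no control-plane certs need regeneration before the node rejoins. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400 && echo "valid for 24h+"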
	I0917 00:33:30.825614  619438 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 crio true true} ...
	I0917 00:33:30.825731  619438 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
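Each joining node gets a node-specific kubelet unit: `--hostname-override` and `--node-ip` pin its identity (here ha-671025-m03 at 192.168.49.4), and `--bootstrap-kubeconfig` lets the kubelet request its client certificate via TLS bootstrapping before /etc/kubernetes/kubelet.conf exists.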
	I0917 00:33:30.825755  619438 kube-vip.go:115] generating kube-vip config ...
	I0917 00:33:30.825793  619438 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:33:30.839517  619438 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
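kube-vip only turns on its control-plane load balancer when the kernel's IPVS modules are present, and the probe is simply `lsmod | grep ip_vs`, which exits 1 inside these kicbase containers. On a host whose kernel ships the module, it could be loaded and re-checked like this (sketch; availability depends on the kernel build):

    sudo modprobe ip_vs
    lsmod | grep ip_vs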
	I0917 00:33:30.839587  619438 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
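Per this manifest, kube-vip runs as a static pod on each control-plane node (it is written to /etc/kubernetes/manifests below), advertises the VIP 192.168.49.254 on eth0 via ARP, and elects a single active holder through a kube-system Lease named plndr-cp-lock. On a healthy cluster the current VIP owner can be read back with standard kubectl (assuming the Lease has been created):

    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'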
	I0917 00:33:30.839637  619438 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:33:30.849197  619438 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:33:30.849283  619438 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:33:30.859805  619438 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 00:33:30.879168  619438 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:33:30.898461  619438 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:33:30.918131  619438 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:33:30.922054  619438 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:33:30.934606  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:31.047135  619438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:33:31.060828  619438 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:33:31.061141  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:31.063169  619438 out.go:179] * Verifying Kubernetes components...
	I0917 00:33:31.064429  619438 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:33:31.179306  619438 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:33:31.194472  619438 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:33:31.194609  619438 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:33:31.194890  619438 node_ready.go:35] waiting up to 6m0s for node "ha-671025-m03" to be "Ready" ...
	I0917 00:33:31.198458  619438 node_ready.go:49] node "ha-671025-m03" is "Ready"
	I0917 00:33:31.198488  619438 node_ready.go:38] duration metric: took 3.579476ms for node "ha-671025-m03" to be "Ready" ...
	I0917 00:33:31.198503  619438 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:33:31.198550  619438 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:33:31.212138  619438 api_server.go:72] duration metric: took 151.254038ms to wait for apiserver process to appear ...
	I0917 00:33:31.212172  619438 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:33:31.212199  619438 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:33:31.217814  619438 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
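This probe is reproducible by hand; /healthz is served by kube-apiserver over TLS, so either supply the cluster CA or skip verification (sketch):

    curl -sk https://192.168.49.2:8443/healthz   # expected output: ok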
	I0917 00:33:31.218774  619438 api_server.go:141] control plane version: v1.34.0
	I0917 00:33:31.218795  619438 api_server.go:131] duration metric: took 6.616763ms to wait for apiserver health ...
	I0917 00:33:31.218803  619438 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:33:31.225098  619438 system_pods.go:59] 24 kube-system pods found
	I0917 00:33:31.225134  619438 system_pods.go:61] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:31.225141  619438 system_pods.go:61] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:31.225149  619438 system_pods.go:61] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:33:31.225155  619438 system_pods.go:61] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:33:31.225163  619438 system_pods.go:61] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:33:31.225168  619438 system_pods.go:61] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:33:31.225177  619438 system_pods.go:61] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:33:31.225185  619438 system_pods.go:61] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:33:31.225190  619438 system_pods.go:61] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:33:31.225199  619438 system_pods.go:61] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:33:31.225205  619438 system_pods.go:61] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:33:31.225209  619438 system_pods.go:61] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:33:31.225213  619438 system_pods.go:61] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:33:31.225219  619438 system_pods.go:61] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:33:31.225225  619438 system_pods.go:61] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:33:31.225228  619438 system_pods.go:61] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:33:31.225231  619438 system_pods.go:61] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:33:31.225235  619438 system_pods.go:61] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:33:31.225237  619438 system_pods.go:61] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:33:31.225242  619438 system_pods.go:61] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:33:31.225247  619438 system_pods.go:61] "kube-vip-ha-671025" [bcb7c84b-932c-463e-a710-1d665741e70a] Running
	I0917 00:33:31.225250  619438 system_pods.go:61] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:33:31.225253  619438 system_pods.go:61] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:33:31.225255  619438 system_pods.go:61] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:33:31.225261  619438 system_pods.go:74] duration metric: took 6.452715ms to wait for pod list to return data ...
	I0917 00:33:31.225280  619438 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:33:31.228376  619438 default_sa.go:45] found service account: "default"
	I0917 00:33:31.228411  619438 default_sa.go:55] duration metric: took 3.119992ms for default service account to be created ...
	I0917 00:33:31.228422  619438 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:33:31.233445  619438 system_pods.go:86] 24 kube-system pods found
	I0917 00:33:31.233478  619438 system_pods.go:89] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:31.233487  619438 system_pods.go:89] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:33:31.233491  619438 system_pods.go:89] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running
	I0917 00:33:31.233495  619438 system_pods.go:89] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running
	I0917 00:33:31.233501  619438 system_pods.go:89] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:33:31.233504  619438 system_pods.go:89] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:33:31.233508  619438 system_pods.go:89] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:33:31.233511  619438 system_pods.go:89] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:33:31.233517  619438 system_pods.go:89] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running
	I0917 00:33:31.233523  619438 system_pods.go:89] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running
	I0917 00:33:31.233529  619438 system_pods.go:89] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:33:31.233535  619438 system_pods.go:89] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running
	I0917 00:33:31.233540  619438 system_pods.go:89] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running
	I0917 00:33:31.233548  619438 system_pods.go:89] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:33:31.233555  619438 system_pods.go:89] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:33:31.233559  619438 system_pods.go:89] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:33:31.233566  619438 system_pods.go:89] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:33:31.233570  619438 system_pods.go:89] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running
	I0917 00:33:31.233576  619438 system_pods.go:89] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running
	I0917 00:33:31.233581  619438 system_pods.go:89] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:33:31.233587  619438 system_pods.go:89] "kube-vip-ha-671025" [bcb7c84b-932c-463e-a710-1d665741e70a] Running
	I0917 00:33:31.233590  619438 system_pods.go:89] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:33:31.233596  619438 system_pods.go:89] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:33:31.233599  619438 system_pods.go:89] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:33:31.233605  619438 system_pods.go:126] duration metric: took 5.178303ms to wait for k8s-apps to be running ...
	I0917 00:33:31.233615  619438 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:33:31.233661  619438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:33:31.246667  619438 system_svc.go:56] duration metric: took 13.0386ms WaitForService to wait for kubelet
	I0917 00:33:31.246701  619438 kubeadm.go:578] duration metric: took 185.824043ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:33:31.246730  619438 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:33:31.250636  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:31.250665  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:31.250679  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:31.250684  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:31.250690  619438 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:33:31.250694  619438 node_conditions.go:123] node cpu capacity is 8
	I0917 00:33:31.250700  619438 node_conditions.go:105] duration metric: took 3.96358ms to run NodePressure ...
	I0917 00:33:31.250716  619438 start.go:241] waiting for startup goroutines ...
	I0917 00:33:31.250743  619438 start.go:255] writing updated cluster config ...
	I0917 00:33:31.253191  619438 out.go:203] 
	I0917 00:33:31.255560  619438 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:33:31.255716  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:31.257849  619438 out.go:179] * Starting "ha-671025-m04" worker node in "ha-671025" cluster
	I0917 00:33:31.259401  619438 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:33:31.260716  619438 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:33:31.262230  619438 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:33:31.262264  619438 cache.go:58] Caching tarball of preloaded images
	I0917 00:33:31.262330  619438 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:33:31.262386  619438 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:33:31.262432  619438 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:33:31.262581  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:31.285684  619438 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:33:31.285706  619438 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:33:31.285722  619438 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:33:31.285751  619438 start.go:360] acquireMachinesLock for ha-671025-m04: {Name:mka8d143727db583191b041d9fdffdc34290d3fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:33:31.285824  619438 start.go:364] duration metric: took 55.532µs to acquireMachinesLock for "ha-671025-m04"
	I0917 00:33:31.285843  619438 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:33:31.285851  619438 fix.go:54] fixHost starting: m04
	I0917 00:33:31.286063  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:33:31.305028  619438 fix.go:112] recreateIfNeeded on ha-671025-m04: state=Stopped err=<nil>
	W0917 00:33:31.305061  619438 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:33:31.307579  619438 out.go:252] * Restarting existing docker container for "ha-671025-m04" ...
	I0917 00:33:31.307671  619438 cli_runner.go:164] Run: docker start ha-671025-m04
	I0917 00:33:31.575879  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:33:31.595646  619438 kic.go:430] container "ha-671025-m04" state is running.
	I0917 00:33:31.596093  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:33:31.616747  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:33:31.617092  619438 machine.go:93] provisionDockerMachine start ...
	I0917 00:33:31.617170  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:33:31.636573  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:33:31.636791  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I0917 00:33:31.636802  619438 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:33:31.637630  619438 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36226->127.0.0.1:33193: read: connection reset by peer
	I0917 00:33:34.638709  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	[... the same "connection refused" dial to 127.0.0.1:33193 repeated every ~3s from 00:33:37 through 00:36:25 ...]
	I0917 00:36:28.721777  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33193: connect: connection refused
	I0917 00:36:31.722479  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:36:31.722518  619438 ubuntu.go:182] provisioning hostname "ha-671025-m04"
	I0917 00:36:31.722607  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:31.744520  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:31.744620  619438 machine.go:96] duration metric: took 3m0.127509973s to provisionDockerMachine
	I0917 00:36:31.744723  619438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:36:31.744770  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:31.764601  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:31.764736  619438 retry.go:31] will retry after 288.945807ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:32.054420  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:32.074595  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:32.074728  619438 retry.go:31] will retry after 272.369407ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:32.348309  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:32.368462  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:32.368608  619438 retry.go:31] will retry after 744.516266ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:33.113868  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:33.133032  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:33.133163  619438 retry.go:31] will retry after 492.951246ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:33.626619  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:33.647357  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	W0917 00:36:33.647505  619438 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:36:33.647528  619438 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:33.647587  619438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:36:33.647631  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:33.666215  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:33.666338  619438 retry.go:31] will retry after 272.675779ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:33.939657  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:33.958470  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:33.958588  619438 retry.go:31] will retry after 525.446207ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:34.484331  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:34.504346  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:36:34.504492  619438 retry.go:31] will retry after 588.594219ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:35.093370  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:36:35.116893  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	W0917 00:36:35.117042  619438 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:36:35.117086  619438 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:35.117113  619438 fix.go:56] duration metric: took 3m3.831261756s for fixHost
	I0917 00:36:35.117126  619438 start.go:83] releasing machines lock for "ha-671025-m04", held for 3m3.831291336s
	W0917 00:36:35.117142  619438 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:36:35.117240  619438 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:36:35.117254  619438 start.go:729] Will try again in 5 seconds ...
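The `retry.go:31` entries above come from minikube's retry helper wrapping each `docker container inspect`: a bounded loop with short, jittered delays, followed here by a 5-second cool-off and one full restart attempt. A rough shell rendering of the same pattern (hypothetical sketch, simple doubling instead of jitter):

    delay=0.3
    for attempt in 1 2 3 4 5; do
      docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-671025-m04 && break
      echo "will retry after ${delay}s"
      sleep "$delay"
      delay=$(awk -v d="$delay" 'BEGIN{print d*2}')   # grow the delay each round
    done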
	I0917 00:36:40.118524  619438 start.go:360] acquireMachinesLock for ha-671025-m04: {Name:mka8d143727db583191b041d9fdffdc34290d3fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:36:40.118656  619438 start.go:364] duration metric: took 88.188µs to acquireMachinesLock for "ha-671025-m04"
	I0917 00:36:40.118689  619438 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:36:40.118698  619438 fix.go:54] fixHost starting: m04
	I0917 00:36:40.119106  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:36:40.139538  619438 fix.go:112] recreateIfNeeded on ha-671025-m04: state=Stopped err=<nil>
	W0917 00:36:40.139579  619438 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:36:40.141549  619438 out.go:252] * Restarting existing docker container for "ha-671025-m04" ...
	I0917 00:36:40.141624  619438 cli_runner.go:164] Run: docker start ha-671025-m04
	I0917 00:36:40.412862  619438 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:36:40.433322  619438 kic.go:430] container "ha-671025-m04" state is running.
	I0917 00:36:40.433799  619438 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:36:40.453513  619438 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:36:40.453934  619438 machine.go:93] provisionDockerMachine start ...
	I0917 00:36:40.454059  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:36:40.473978  619438 main.go:141] libmachine: Using SSH client type: native
	I0917 00:36:40.474315  619438 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I0917 00:36:40.474331  619438 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:36:40.475099  619438 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33606->127.0.0.1:33198: read: connection reset by peer
	I0917 00:36:43.475724  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	[... the same "connection refused" dial to 127.0.0.1:33198 repeated every ~3s from 00:36:46 through 00:38:28 ...]
	I0917 00:38:31.520615  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:34.522114  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:37.523670  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:40.526331  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:43.527374  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:46.529741  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:49.531301  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:52.532585  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:55.533793  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:38:58.534231  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:01.534621  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:04.536103  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:07.538458  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:10.540484  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:13.541711  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:16.543992  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:19.545340  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:22.546576  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:25.548676  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:28.549734  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:31.550736  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:34.551691  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:37.553774  619438 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:33198: connect: connection refused
	I0917 00:39:40.555606  619438 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:39:40.555645  619438 ubuntu.go:182] provisioning hostname "ha-671025-m04"
	I0917 00:39:40.555731  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:40.576194  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:40.576295  619438 machine.go:96] duration metric: took 3m0.122321612s to provisionDockerMachine
	I0917 00:39:40.576379  619438 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:39:40.576440  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:40.595844  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:40.595977  619438 retry.go:31] will retry after 334.138339ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:40.931319  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:40.951370  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:40.951504  619438 retry.go:31] will retry after 347.147392ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:41.299070  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:41.319717  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:41.319850  619438 retry.go:31] will retry after 612.672267ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:41.933618  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:41.954663  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	W0917 00:39:41.954778  619438 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:39:41.954797  619438 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:41.954845  619438 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:39:41.954878  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:41.975511  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:41.975621  619438 retry.go:31] will retry after 279.089961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:42.255093  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:42.275630  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:42.275759  619438 retry.go:31] will retry after 427.799265ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:42.704460  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:42.723085  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	I0917 00:39:42.723291  619438 retry.go:31] will retry after 748.226264ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:43.472625  619438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	W0917 00:39:43.493097  619438 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04 returned with exit code 1
	W0917 00:39:43.493238  619438 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:39:43.493260  619438 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:43.493279  619438 fix.go:56] duration metric: took 3m3.3745821s for fixHost
	I0917 00:39:43.493294  619438 start.go:83] releasing machines lock for "ha-671025-m04", held for 3m3.374622198s
	W0917 00:39:43.493451  619438 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-671025" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0917 00:39:43.495244  619438 out.go:203] 
	W0917 00:39:43.496536  619438 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0917 00:39:43.496558  619438 out.go:285] * 
	W0917 00:39:43.498254  619438 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 00:39:43.499426  619438 out.go:203] 
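
	For context on the failure mode above: the provisioner resolves the node's SSH endpoint by asking Docker which host port is published for the container's 22/tcp, then dials it on 127.0.0.1. Once the container is no longer running, the inspect template has nothing to index (hence "unable to inspect a not running container to get SSH port"), and the dial loop can only churn until its 3m0s budget expires. A minimal Go sketch of that lookup-and-dial cycle follows; it is an illustration of the mechanism, not minikube's actual code, with the container name and ~3s cadence taken from the log above.

		package main

		import (
			"fmt"
			"net"
			"os/exec"
			"strings"
			"time"
		)

		// sshHostPort asks Docker for the host port published for 22/tcp,
		// mirroring the `docker container inspect -f ...` calls in the log.
		func sshHostPort(container string) (string, error) {
			out, err := exec.Command("docker", "container", "inspect", "-f",
				`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
				container).Output()
			if err != nil {
				// A stopped container makes inspect exit 1, as in the warnings above.
				return "", fmt.Errorf("inspect %s: %w", container, err)
			}
			return strings.TrimSpace(string(out)), nil
		}

		func main() {
			port, err := sshHostPort("ha-671025-m04")
			if err != nil {
				fmt.Println(err)
				return
			}
			// Dial with the ~3s cadence seen in the log until sshd answers.
			for {
				conn, err := net.DialTimeout("tcp", net.JoinHostPort("127.0.0.1", port), 3*time.Second)
				if err == nil {
					conn.Close()
					fmt.Println("sshd reachable on port", port)
					return
				}
				fmt.Println("Error dialing TCP:", err)
				time.Sleep(3 * time.Second)
			}
		}

	The same template can be exercised by hand with: docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-671025-m04 — which exits 1 for a stopped container, exactly as the warnings above show.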
	
	
	==> CRI-O <==
	Sep 17 00:33:14 ha-671025 crio[565]: time="2025-09-17 00:33:14.250668570Z" level=info msg="Started container" PID=1371 containerID=0a6ec806f09b0ec6cd3c05e4e3ae47a201470e8dd91c163a0a50e778942c1fdf description=kube-system/coredns-66bc5c9577-vfj56/coredns id=e249fce6-f4cd-4113-83e0-50d04adcc10f name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b722ecf2f3e80164bf38e495945b2f9de2da062098248c531372f1254b04027
	Sep 17 00:33:14 ha-671025 crio[565]: time="2025-09-17 00:33:14.254529988Z" level=info msg="Started container" PID=1357 containerID=0f6f22dfaf3f5c42ab834fbdacc268222b9381892b372e6c6777b8cdc48ae94d description=kube-system/kube-proxy-f58dt/kube-proxy id=a0f2eb2e-8af2-4dfd-a58a-1737b5f99d21 name=/runtime.v1.RuntimeService/StartContainer sandboxID=86370afe3da8daa2b358bfa93e3418e66144d35d035fed0a638a50924fa59408
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.753340587Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.758517303Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.758557932Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.758575572Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.764982577Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.765047831Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.765068425Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.769374951Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.769549150Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.769575818Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.773978219Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 00:33:24 ha-671025 crio[565]: time="2025-09-17 00:33:24.774011909Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.807516826Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ed976c02-d574-4c82-bfc5-c9beb8325877 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.807738230Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ed976c02-d574-4c82-bfc5-c9beb8325877 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.808425117Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f7b84450-4a24-4619-b6df-a4e028fc709d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.808644322Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f7b84450-4a24-4619-b6df-a4e028fc709d name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.809516747Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f7135108-062d-4210-941f-2121b4150437 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.809630183Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.824058373Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e4482785ad19323f369936fbb4daa43031f78405e411d03a635704ce0b9bfa42/merged/etc/passwd: no such file or directory"
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.824101095Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e4482785ad19323f369936fbb4daa43031f78405e411d03a635704ce0b9bfa42/merged/etc/group: no such file or directory"
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.883592079Z" level=info msg="Created container ecf22eec472717336b0fb89198d6c0b167e76973e6e3cd230dd0afcde977a9a9: kube-system/storage-provisioner/storage-provisioner" id=f7135108-062d-4210-941f-2121b4150437 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.884330281Z" level=info msg="Starting container: ecf22eec472717336b0fb89198d6c0b167e76973e6e3cd230dd0afcde977a9a9" id=e3034afa-a009-4659-9e70-4826d4a036d3 name=/runtime.v1.RuntimeService/StartContainer
	Sep 17 00:33:44 ha-671025 crio[565]: time="2025-09-17 00:33:44.892093157Z" level=info msg="Started container" PID=1755 containerID=ecf22eec472717336b0fb89198d6c0b167e76973e6e3cd230dd0afcde977a9a9 description=kube-system/storage-provisioner/storage-provisioner id=e3034afa-a009-4659-9e70-4826d4a036d3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=84705f66b6f00fabea4a34fd2340cb783d9fd23e696a1d70dfe64392537e0e17
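
	The CRI-O entries above are the runtime side of CRI gRPC calls such as /runtime.v1.RuntimeService/CreateContainer and StartContainer. The same surface can be queried directly over CRI-O's socket; below is a minimal sketch using the published CRI API, where the socket path is CRI-O's conventional default and an assumption here. Running crictl against the same endpoint reports the equivalent view.

		package main

		import (
			"context"
			"fmt"
			"time"

			"google.golang.org/grpc"
			"google.golang.org/grpc/credentials/insecure"
			runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
		)

		func main() {
			// CRI-O listens on a local unix socket; no TLS is involved.
			conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
				grpc.WithTransportCredentials(insecure.NewCredentials()))
			if err != nil {
				panic(err)
			}
			defer conn.Close()

			ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
			defer cancel()

			rt := runtimeapi.NewRuntimeServiceClient(conn)
			resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
			if err != nil {
				panic(err)
			}
			for _, c := range resp.Containers {
				// Roughly the columns of the `==> container status <==` table below.
				fmt.Printf("%.13s  %-28s  %v\n", c.Id, c.Metadata.Name, c.State)
			}
		}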
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ecf22eec47271       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago       Running             storage-provisioner       3                   84705f66b6f00       storage-provisioner
	0a6ec806f09b0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   6 minutes ago       Running             coredns                   1                   3b722ecf2f3e8       coredns-66bc5c9577-vfj56
	911039394b566       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   6 minutes ago       Running             busybox                   1                   0d31993e30b9d       busybox-7b57f96db7-wj4r5
	0f6f22dfaf3f5       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   6 minutes ago       Running             kube-proxy                1                   86370afe3da8d       kube-proxy-f58dt
	d8a3a53722ee7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 minutes ago       Running             kindnet-cni               1                   573be4d17bc4c       kindnet-9zvhz
	79c32235f9c36       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago       Exited              storage-provisioner       2                   84705f66b6f00       storage-provisioner
	1151cd93da2ad       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   6 minutes ago       Running             coredns                   1                   4c29d74d630f3       coredns-66bc5c9577-mqh24
	dd21b88addb23       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   7 minutes ago       Running             kube-controller-manager   1                   17b3a59f2d7b6       kube-controller-manager-ha-671025
	c7b95b9bb5f9d       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   7 minutes ago       Running             kube-scheduler            1                   0d6a7ac1856cb       kube-scheduler-ha-671025
	3fa5cc179a477       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   7 minutes ago       Running             kube-apiserver            1                   c0bb4371ed6c8       kube-apiserver-ha-671025
	3a99a51aacd42       765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23   7 minutes ago       Running             kube-vip                  0                   aca3020b8c9d0       kube-vip-ha-671025
	feb54ecd21790       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 minutes ago       Running             etcd                      1                   ff786868f6409       etcd-ha-671025
	
	
	==> coredns [0a6ec806f09b0ec6cd3c05e4e3ae47a201470e8dd91c163a0a50e778942c1fdf] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41081 - 22204 "HINFO IN 3438997292128027948.7850884943177890662. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020285532s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
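
	The reflector failures above are client-go list/watch calls from CoreDNS's kubernetes plugin timing out against the in-cluster service VIP 10.96.0.1:443. The same list call, issued with client-go from inside a pod, looks roughly like the sketch below, assuming the usual in-cluster service-account credentials are mounted.

		package main

		import (
			"context"
			"fmt"

			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/rest"
		)

		func main() {
			// Reads the mounted service-account token and KUBERNETES_SERVICE_*
			// env vars, which resolve to the 10.96.0.1:443 VIP seen in the log.
			cfg, err := rest.InClusterConfig()
			if err != nil {
				panic(err)
			}
			cs, err := kubernetes.NewForConfig(cfg)
			if err != nil {
				panic(err)
			}
			// Equivalent of the failing request in the reflector error:
			// GET /api/v1/services?limit=500
			svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(
				context.Background(), metav1.ListOptions{Limit: 500})
			if err != nil {
				panic(err) // an i/o timeout here points at the VIP/kube-proxy path
			}
			fmt.Println("services:", len(svcs.Items))
		}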
	
	
	==> coredns [1151cd93da2add1289085967f6fd11dca725fe05835ee8882364ce8ef4d5c1d9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34114 - 63412 "HINFO IN 8932016049737155266.1565975528977438817. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04450606s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-671025
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T00_28_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:28:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:40:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:38:49 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:38:49 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:38:49 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:38:49 +0000   Wed, 17 Sep 2025 00:28:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-671025
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 8ed2fe35b45d401da396432da19b49e7
	  System UUID:                3f139a28-0338-43b0-8ed0-9128b9dcda65
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wj4r5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-mqh24             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 coredns-66bc5c9577-vfj56             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-ha-671025                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-9zvhz                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-671025             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-671025    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-f58dt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-671025             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-671025                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 11m                  kube-proxy       
	  Normal  Starting                 6m46s                kube-proxy       
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)    kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)    kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)    kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                  kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m                  kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m                  kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           11m                  node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  NodeReady                11m                  kubelet          Node ha-671025 status is now: NodeReady
	  Normal  RegisteredNode           11m                  node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           8m39s                node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  Starting                 7m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m2s (x8 over 7m2s)  kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m2s (x8 over 7m2s)  kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m2s (x8 over 7m2s)  kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m46s                node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           6m46s                node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           6m32s                node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	
	
	Name:               ha-671025-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_17T00_29_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:29:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:40:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:33:22 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:33:22 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:33:22 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:33:22 +0000   Wed, 17 Sep 2025 00:29:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-671025-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 34a83f19fcce42489e31c52ddb1f71d8
	  System UUID:                7d7ccba3-1786-4f88-a69c-4a852e967ea0
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-zw5tc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-671025-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-7scsq                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-671025-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-671025-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-4k8lz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-671025-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-671025-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m35s                  kube-proxy       
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  RegisteredNode           10m                    node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  NodeHasNoDiskPressure    8m45s (x8 over 8m45s)  kubelet          Node ha-671025-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m45s (x8 over 8m45s)  kubelet          Node ha-671025-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m45s (x8 over 8m45s)  kubelet          Node ha-671025-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           8m39s                  node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  Starting                 7m                     kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m (x8 over 7m)        kubelet          Node ha-671025-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m (x8 over 7m)        kubelet          Node ha-671025-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m (x8 over 7m)        kubelet          Node ha-671025-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m46s                  node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode           6m46s                  node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode           6m32s                  node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	
	
	==> etcd [feb54ecd21790065a6ac453e4ff208898c905c70ebfc8b861ab8365f42e7ee15] <==
	{"level":"info","ts":"2025-09-17T00:33:24.731280Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118"}
	{"level":"warn","ts":"2025-09-17T00:33:25.373568Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"58f1161d61ce118","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:33:25.373669Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"58f1161d61ce118","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-17T00:39:49.658319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:39498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:39:49.685926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:39528","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:39:49.695161Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892 13140772435598162251)"}
	{"level":"info","ts":"2025-09-17T00:39:49.698056Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"58f1161d61ce118","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-09-17T00:39:49.698118Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"58f1161d61ce118"}
	{"level":"warn","ts":"2025-09-17T00:39:49.698201Z","caller":"etcdserver/server.go:718","msg":"rejected Raft message from removed member","local-member-id":"aec36adc501070cc","removed-member-id":"58f1161d61ce118"}
	{"level":"warn","ts":"2025-09-17T00:39:49.698275Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2025-09-17T00:39:49.698272Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"58f1161d61ce118"}
	{"level":"info","ts":"2025-09-17T00:39:49.698300Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"58f1161d61ce118"}
	{"level":"warn","ts":"2025-09-17T00:39:49.698547Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"58f1161d61ce118"}
	{"level":"info","ts":"2025-09-17T00:39:49.698622Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"58f1161d61ce118"}
	{"level":"info","ts":"2025-09-17T00:39:49.698655Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118"}
	{"level":"warn","ts":"2025-09-17T00:39:49.698795Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118","error":"context canceled"}
	{"level":"warn","ts":"2025-09-17T00:39:49.698837Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"58f1161d61ce118","error":"failed to read 58f1161d61ce118 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-09-17T00:39:49.698865Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118"}
	{"level":"warn","ts":"2025-09-17T00:39:49.699000Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118","error":"context canceled"}
	{"level":"info","ts":"2025-09-17T00:39:49.699036Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"58f1161d61ce118"}
	{"level":"info","ts":"2025-09-17T00:39:49.699045Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"58f1161d61ce118"}
	{"level":"info","ts":"2025-09-17T00:39:49.699059Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"58f1161d61ce118"}
	{"level":"info","ts":"2025-09-17T00:39:49.699122Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"58f1161d61ce118"}
	{"level":"warn","ts":"2025-09-17T00:39:49.706432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:46130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:39:49.706719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:46134","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:40:01 up  3:22,  0 users,  load average: 0.44, 0.63, 3.17
	Linux ha-671025 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [d8a3a53722ee71de725c2794a050878da7894fbc523bb6bac8efe7e38865e48e] <==
	I0917 00:39:14.752917       1 main.go:301] handling current node
	I0917 00:39:14.752930       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:39:14.752934       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:39:24.755944       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:39:24.755981       1 main.go:301] handling current node
	I0917 00:39:24.755998       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:39:24.756003       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:39:24.756183       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:39:24.756192       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:39:34.760510       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:39:34.760554       1 main.go:301] handling current node
	I0917 00:39:34.760573       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:39:34.760579       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:39:34.760773       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:39:34.760789       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:39:44.756468       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:39:44.756507       1 main.go:301] handling current node
	I0917 00:39:44.756526       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:39:44.756532       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:39:44.756690       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0917 00:39:44.756700       1 main.go:324] Node ha-671025-m03 has CIDR [10.244.2.0/24] 
	I0917 00:39:54.752279       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:39:54.752346       1 main.go:301] handling current node
	I0917 00:39:54.752365       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:39:54.752371       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
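
	kindnet's per-node loop above pairs each remote node's InternalIP with its PodCIDR; the effect, broadly, is a host route so cross-node pod traffic has a next hop (e.g. 10.244.1.0/24 via 192.168.49.3). A rough sketch of that kind of route install using the vishvananda/netlink package follows; it illustrates the idea, not kindnet's actual code, with the CIDR and node IP taken from the log above.

		package main

		import (
			"net"

			"github.com/vishvananda/netlink"
		)

		// ensurePodRoute installs (or updates) a route for a remote node's
		// PodCIDR via that node's InternalIP.
		func ensurePodRoute(podCIDR, nodeIP string) error {
			_, dst, err := net.ParseCIDR(podCIDR)
			if err != nil {
				return err
			}
			// RouteReplace adds the route or updates an existing one; this
			// needs CAP_NET_ADMIN, which a node-level CNI daemon runs with.
			return netlink.RouteReplace(&netlink.Route{
				Dst: dst,
				Gw:  net.ParseIP(nodeIP),
			})
		}

		func main() {
			// Values from the kindnet log above: ha-671025-m02's PodCIDR and IP.
			if err := ensurePodRoute("10.244.1.0/24", "192.168.49.3"); err != nil {
				panic(err)
			}
		}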
	
	
	==> kube-apiserver [3fa5cc179a477659367fd100adcdc1e4e58f2184457c9b340163caae4aaa13da] <==
	I0917 00:33:12.203011       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:33:12.204560       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0917 00:33:12.215378       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0917 00:33:12.225713       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0917 00:33:12.225748       1 policy_source.go:240] refreshing policies
	E0917 00:33:12.257458       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0917 00:33:12.275512       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 00:33:13.102620       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0917 00:33:13.467644       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0917 00:33:13.469377       1 controller.go:667] quota admission added evaluator for: endpoints
	I0917 00:33:13.475334       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 00:33:13.710304       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0917 00:33:15.400126       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0917 00:33:15.451962       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0917 00:33:15.550108       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0917 00:34:30.180357       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:34:36.295135       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:35:58.087614       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:36:04.861775       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:37:09.469711       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:37:20.231944       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:38:10.023844       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:38:42.747905       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:39:27.376187       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:39:48.847821       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [dd21b88addb237f3d8472dcc61de839b89d21948ea83cb11a21f4ab55982667c] <==
	I0917 00:33:15.048114       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0917 00:33:15.050103       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0917 00:33:15.050156       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0917 00:33:15.050198       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0917 00:33:15.051603       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0917 00:33:15.052580       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0917 00:33:15.052596       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:33:15.052656       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0917 00:33:15.052705       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0917 00:33:15.052712       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0917 00:33:15.052716       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0917 00:33:15.072139       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:33:15.074323       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:33:15.079457       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0917 00:33:15.079609       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 00:33:15.079783       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-671025-m02"
	I0917 00:33:15.079806       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-671025"
	I0917 00:33:15.079783       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-671025-m03"
	I0917 00:33:15.079891       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	E0917 00:39:46.559964       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0917 00:39:55.062164       1 gc_controller.go:151] "Failed to get node" err="node \"ha-671025-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671025-m03"
	E0917 00:39:55.062205       1 gc_controller.go:151] "Failed to get node" err="node \"ha-671025-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671025-m03"
	E0917 00:39:55.062211       1 gc_controller.go:151] "Failed to get node" err="node \"ha-671025-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671025-m03"
	E0917 00:39:55.062216       1 gc_controller.go:151] "Failed to get node" err="node \"ha-671025-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671025-m03"
	E0917 00:39:55.062220       1 gc_controller.go:151] "Failed to get node" err="node \"ha-671025-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671025-m03"
	
	
	==> kube-proxy [0f6f22dfaf3f5c42ab834fbdacc268222b9381892b372e6c6777b8cdc48ae94d] <==
	I0917 00:33:14.310969       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:33:14.385159       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:33:14.485410       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:33:14.485454       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:33:14.485579       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:33:14.505543       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:33:14.505612       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:33:14.510944       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:33:14.511517       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:33:14.511559       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:33:14.512935       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:33:14.512967       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:33:14.513038       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:33:14.513032       1 config.go:200] "Starting service config controller"
	I0917 00:33:14.513056       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:33:14.513059       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:33:14.513068       1 config.go:309] "Starting node config controller"
	I0917 00:33:14.513103       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:33:14.513111       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:33:14.613338       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:33:14.613363       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:33:14.613385       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c7b95b9bb5f9dc570ba9c778a8fbb5b9cf9025f366845bc5684f2c97fb0f34c3] <==
	I0917 00:33:01.038603       1 serving.go:386] Generated self-signed cert in-memory
	W0917 00:33:11.582258       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0917 00:33:11.582299       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 00:33:11.582308       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 00:33:12.169895       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:33:12.169942       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:33:12.174415       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:33:12.174635       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:33:12.174667       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:33:12.174692       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:33:12.274752       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 17 00:37:59 ha-671025 kubelet[719]: E0917 00:37:59.717155     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069479716902039  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:09 ha-671025 kubelet[719]: E0917 00:38:09.719199     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069489718952365  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:09 ha-671025 kubelet[719]: E0917 00:38:09.719231     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069489718952365  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:19 ha-671025 kubelet[719]: E0917 00:38:19.720791     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069499720508720  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:19 ha-671025 kubelet[719]: E0917 00:38:19.720832     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069499720508720  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:29 ha-671025 kubelet[719]: E0917 00:38:29.722482     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069509722189753  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:29 ha-671025 kubelet[719]: E0917 00:38:29.722526     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069509722189753  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:39 ha-671025 kubelet[719]: E0917 00:38:39.724772     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069519724406774  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:39 ha-671025 kubelet[719]: E0917 00:38:39.724820     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069519724406774  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:49 ha-671025 kubelet[719]: E0917 00:38:49.726218     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069529725971912  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:49 ha-671025 kubelet[719]: E0917 00:38:49.726259     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069529725971912  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:59 ha-671025 kubelet[719]: E0917 00:38:59.727787     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069539727493186  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:38:59 ha-671025 kubelet[719]: E0917 00:38:59.727827     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069539727493186  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:09 ha-671025 kubelet[719]: E0917 00:39:09.729035     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069549728835025  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:09 ha-671025 kubelet[719]: E0917 00:39:09.729066     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069549728835025  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:19 ha-671025 kubelet[719]: E0917 00:39:19.730347     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069559730086423  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:19 ha-671025 kubelet[719]: E0917 00:39:19.730386     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069559730086423  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:29 ha-671025 kubelet[719]: E0917 00:39:29.731647     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069569731379538  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:29 ha-671025 kubelet[719]: E0917 00:39:29.731688     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069569731379538  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:39 ha-671025 kubelet[719]: E0917 00:39:39.732899     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069579732705681  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:39 ha-671025 kubelet[719]: E0917 00:39:39.732940     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069579732705681  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:49 ha-671025 kubelet[719]: E0917 00:39:49.734288     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069589734023750  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:49 ha-671025 kubelet[719]: E0917 00:39:49.734515     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069589734023750  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:59 ha-671025 kubelet[719]: E0917 00:39:59.735835     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069599735569033  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:39:59 ha-671025 kubelet[719]: E0917 00:39:59.735867     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069599735569033  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	

                                                
                                                
-- /stdout --
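Note: the kubelet log captured above repeats the same pair of eviction-manager errors every ten seconds ("failed to get HasDedicatedImageFs: missing image stats"): the image-filesystem stats CRI-O returns for /var/lib/containers/storage/overlay-images are not in the shape the kubelet expects, so eviction bookkeeping never synchronizes. A minimal sketch for inspecting what the runtime actually reports on the node, assuming crictl is available inside the minikube container (the profile name ha-671025 is taken from the logs above; this is not part of the test itself):

    # Query the image filesystem stats CRI-O exposes over the CRI API.
    minikube -p ha-671025 ssh -- sudo crictl imagefsinfo
    # For comparison, raw usage of the overlay-images store the errors point at.
    minikube -p ha-671025 ssh -- sudo du -sh /var/lib/containers/storage/overlay-images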
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-671025 -n ha-671025
helpers_test.go:269: (dbg) Run:  kubectl --context ha-671025 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-vmzxx
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-671025 describe pod busybox-7b57f96db7-vmzxx
helpers_test.go:290: (dbg) kubectl --context ha-671025 describe pod busybox-7b57f96db7-vmzxx:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-vmzxx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gsm85 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-gsm85:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  15s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  13s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  13s (x2 over 15s)  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  13s (x2 over 15s)  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.73s)
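Note: the FailedScheduling events above account for the degraded verdict directly: after the secondary control-plane node was deleted, three nodes remain, one is marked unschedulable and the other two already host a busybox replica, so the pod's anti-affinity on app=busybox leaves no feasible candidate and preemption cannot help. A hedged sketch of how one could confirm this from the same kubectl context (standard kubectl commands; the <node-name> placeholder is illustrative):

    # List the remaining nodes and spot the SchedulingDisabled one.
    kubectl --context ha-671025 get nodes -o wide
    # Show where the existing busybox replicas run; anti-affinity forbids a second copy per node.
    kubectl --context ha-671025 get pods -l app=busybox -o wide
    # If the blocked node is merely cordoned, uncordoning it would free a slot for the pending replica.
    kubectl --context ha-671025 uncordon <node-name>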

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (1053.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0917 00:41:25.128226  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:43:17.508177  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:45:14.437725  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:46:25.128616  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:47:48.193185  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:50:14.436153  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:25.128056  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:55:14.436792  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:56:25.128177  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: signal: killed (17m30.674420674s)
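Note: "signal: killed" after 17m30s means the test harness hit its timeout and killed the minikube process; the restart itself never returned. The interleaved cert_rotation errors are unrelated to this test: the shared client-cert reloader is still watching certificates belonging to the addons-069011 and functional-836309 profiles, which were torn down earlier in the run. A quick hedged check (paths copied verbatim from the error lines above):

    # Confirm the watched client certs are in fact gone.
    ls -l /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt \
          /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt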

                                                
                                                
-- stdout --
	* [ha-671025] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-671025" primary control-plane node in "ha-671025" cluster
	* Pulling base image v0.0.48 ...
	* Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	* Enabled addons: 
	
	* Starting "ha-671025-m02" control-plane node in "ha-671025" cluster
	* Pulling base image v0.0.48 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-671025-m04" worker node in "ha-671025" cluster
	* Pulling base image v0.0.48 ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:40:31.754550  632515 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:40:31.754860  632515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:40:31.754871  632515 out.go:374] Setting ErrFile to fd 2...
	I0917 00:40:31.754878  632515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:40:31.755104  632515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:40:31.755658  632515 out.go:368] Setting JSON to false
	I0917 00:40:31.756720  632515 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12175,"bootTime":1758057457,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:40:31.756830  632515 start.go:140] virtualization: kvm guest
	I0917 00:40:31.759551  632515 out.go:179] * [ha-671025] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:40:31.761385  632515 notify.go:220] Checking for updates...
	I0917 00:40:31.761413  632515 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:40:31.763139  632515 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:40:31.765601  632515 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:40:31.767780  632515 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 00:40:31.769640  632515 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:40:31.771454  632515 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:40:31.774248  632515 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:40:31.775213  632515 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:40:31.802517  632515 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:40:31.802672  632515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:40:31.861960  632515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:40:31.851812235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:40:31.862083  632515 docker.go:318] overlay module found
	I0917 00:40:31.864164  632515 out.go:179] * Using the docker driver based on existing profile
	I0917 00:40:31.865836  632515 start.go:304] selected driver: docker
	I0917 00:40:31.865858  632515 start.go:918] validating driver "docker" against &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:40:31.866047  632515 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:40:31.866178  632515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:40:31.926530  632515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:40:31.916687214 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:40:31.927170  632515 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:40:31.927200  632515 cni.go:84] Creating CNI manager for ""
	I0917 00:40:31.927261  632515 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 00:40:31.927310  632515 start.go:348] cluster config:
	{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:40:31.929574  632515 out.go:179] * Starting "ha-671025" primary control-plane node in "ha-671025" cluster
	I0917 00:40:31.931055  632515 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:40:31.932656  632515 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:40:31.933886  632515 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:40:31.933961  632515 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 00:40:31.933976  632515 cache.go:58] Caching tarball of preloaded images
	I0917 00:40:31.934005  632515 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:40:31.934112  632515 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:40:31.934126  632515 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:40:31.934274  632515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:40:31.956303  632515 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:40:31.956326  632515 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:40:31.956371  632515 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:40:31.956431  632515 start.go:360] acquireMachinesLock for ha-671025: {Name:mk59b9e849284ed1f29625993b42430f4f0355ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:40:31.956502  632515 start.go:364] duration metric: took 47.858µs to acquireMachinesLock for "ha-671025"
	I0917 00:40:31.956526  632515 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:40:31.956534  632515 fix.go:54] fixHost starting: 
	I0917 00:40:31.956740  632515 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:40:31.977595  632515 fix.go:112] recreateIfNeeded on ha-671025: state=Stopped err=<nil>
	W0917 00:40:31.977630  632515 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:40:31.980559  632515 out.go:252] * Restarting existing docker container for "ha-671025" ...
	I0917 00:40:31.980667  632515 cli_runner.go:164] Run: docker start ha-671025
	I0917 00:40:32.235166  632515 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:40:32.255380  632515 kic.go:430] container "ha-671025" state is running.
	I0917 00:40:32.255799  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:40:32.277450  632515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:40:32.277765  632515 machine.go:93] provisionDockerMachine start ...
	I0917 00:40:32.277858  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:40:32.298083  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:40:32.298439  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I0917 00:40:32.298458  632515 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:40:32.299071  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53442->127.0.0.1:33203: read: connection reset by peer
	I0917 00:40:35.438793  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025
	
	I0917 00:40:35.438835  632515 ubuntu.go:182] provisioning hostname "ha-671025"
	I0917 00:40:35.438907  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:40:35.458591  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:40:35.458843  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I0917 00:40:35.458861  632515 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025 && echo "ha-671025" | sudo tee /etc/hostname
	I0917 00:40:35.613012  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025
	
	I0917 00:40:35.613101  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:40:35.638093  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:40:35.638319  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I0917 00:40:35.638336  632515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:40:35.778694  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:40:35.778724  632515 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:40:35.778759  632515 ubuntu.go:190] setting up certificates
	I0917 00:40:35.778776  632515 provision.go:84] configureAuth start
	I0917 00:40:35.778841  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:40:35.797658  632515 provision.go:143] copyHostCerts
	I0917 00:40:35.797701  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:40:35.797747  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:40:35.797756  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:40:35.797821  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:40:35.797913  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:40:35.797931  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:40:35.797937  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:40:35.797963  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:40:35.798027  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:40:35.798099  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:40:35.798109  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:40:35.798135  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:40:35.798202  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025 san=[127.0.0.1 192.168.49.2 ha-671025 localhost minikube]
	I0917 00:40:35.941958  632515 provision.go:177] copyRemoteCerts
	I0917 00:40:35.942023  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:40:35.942062  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:40:35.960903  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:40:36.059750  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:40:36.059811  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0917 00:40:36.087354  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:40:36.087444  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:40:36.114513  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:40:36.114622  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:40:36.143137  632515 provision.go:87] duration metric: took 364.346394ms to configureAuth
	I0917 00:40:36.143166  632515 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:40:36.143370  632515 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:40:36.143497  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:40:36.162826  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:40:36.163056  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I0917 00:40:36.163075  632515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:40:36.461551  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:40:36.461583  632515 machine.go:96] duration metric: took 4.183799542s to provisionDockerMachine
	I0917 00:40:36.461598  632515 start.go:293] postStartSetup for "ha-671025" (driver="docker")
	I0917 00:40:36.461611  632515 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:40:36.461696  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:40:36.461774  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:40:36.482064  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:40:36.583021  632515 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:40:36.587466  632515 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:40:36.587499  632515 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:40:36.587507  632515 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:40:36.587513  632515 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:40:36.587525  632515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:40:36.587590  632515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:40:36.587663  632515 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:40:36.587676  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:40:36.587758  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:40:36.598899  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:40:36.626439  632515 start.go:296] duration metric: took 164.821052ms for postStartSetup
	I0917 00:40:36.626531  632515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:40:36.626576  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:40:36.645992  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:40:36.741181  632515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:40:36.746062  632515 fix.go:56] duration metric: took 4.78951996s for fixHost
	I0917 00:40:36.746099  632515 start.go:83] releasing machines lock for "ha-671025", held for 4.789584259s
	I0917 00:40:36.746164  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:40:36.764980  632515 ssh_runner.go:195] Run: cat /version.json
	I0917 00:40:36.765007  632515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:40:36.765036  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:40:36.765081  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:40:36.785445  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:40:36.786559  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:40:36.878519  632515 ssh_runner.go:195] Run: systemctl --version
	I0917 00:40:36.953900  632515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:40:37.096904  632515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:40:37.102385  632515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:40:37.112665  632515 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:40:37.112739  632515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:40:37.123238  632515 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:40:37.123263  632515 start.go:495] detecting cgroup driver to use...
	I0917 00:40:37.123299  632515 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:40:37.123374  632515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:40:37.138404  632515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:40:37.151601  632515 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:40:37.151659  632515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:40:37.166312  632515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:40:37.179704  632515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:40:37.246162  632515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:40:37.315085  632515 docker.go:234] disabling docker service ...
	I0917 00:40:37.315155  632515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:40:37.328798  632515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:40:37.342782  632515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:40:37.410643  632515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:40:37.478475  632515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:40:37.490788  632515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:40:37.508635  632515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:40:37.508698  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:37.519575  632515 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:40:37.519647  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:37.531234  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:37.542040  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:37.552460  632515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:40:37.563900  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:37.574568  632515 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:37.585424  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:37.596307  632515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:40:37.605640  632515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:40:37.615373  632515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:40:37.676859  632515 ssh_runner.go:195] Run: sudo systemctl restart crio
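
The block above configures cri-o entirely through in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf: the pause image, cgroup_manager = "systemd", conmon_cgroup = "pod", and a default_sysctls entry opening unprivileged ports, followed by a daemon-reload and a crio restart. A sketch of the two central substitutions applied in memory; the regexes paraphrase the sed patterns above:

```go
package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf applies the sed-style edits from the log: point cri-o
// at the desired pause image and force the chosen cgroup manager.
func patchCrioConf(conf, pauseImage, cgroupMgr string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupMgr))
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"cgroupfs\"\n"
	fmt.Print(patchCrioConf(in, "registry.k8s.io/pause:3.10.1", "systemd"))
}
```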
	I0917 00:40:37.773658  632515 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:40:37.773731  632515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:40:37.777956  632515 start.go:563] Will wait 60s for crictl version
	I0917 00:40:37.778019  632515 ssh_runner.go:195] Run: which crictl
	I0917 00:40:37.781929  632515 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:40:37.820023  632515 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:40:37.820131  632515 ssh_runner.go:195] Run: crio --version
	I0917 00:40:37.859582  632515 ssh_runner.go:195] Run: crio --version
	I0917 00:40:37.900788  632515 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:40:37.902302  632515 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:40:37.921379  632515 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:40:37.925935  632515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
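
The `{ grep -v ...; echo ...; } > /tmp/h.$$` pipeline above makes the /etc/hosts update idempotent: any stale host.minikube.internal entry is stripped before the current mapping is appended, so repeated starts never accumulate duplicates. A sketch of the same upsert in Go:

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHost rewrites an /etc/hosts body so it contains exactly one
// entry for the given hostname, matching the grep -v + echo pipeline.
func upsertHost(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			out = append(out, line)
		}
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "192.168.49.1", "host.minikube.internal"))
}
```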
	I0917 00:40:37.938981  632515 kubeadm.go:875] updating cluster {Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false
logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:40:37.939161  632515 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:40:37.939220  632515 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:40:37.984187  632515 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:40:37.984208  632515 crio.go:433] Images already preloaded, skipping extraction
	I0917 00:40:37.984253  632515 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:40:38.022220  632515 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:40:38.022247  632515 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:40:38.022258  632515 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0917 00:40:38.022383  632515 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
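
The empty `ExecStart=` followed by a second `ExecStart=` in the unit above is the standard systemd drop-in idiom: the first clears the packaged unit's command line so the second replaces it instead of appending. minikube renders this unit from the node config and writes it as the 10-kubeadm.conf drop-in scp'd a few lines below. A minimal sketch of rendering such a drop-in with text/template; the template text paraphrases the unit above and the field names are ours:

```go
package main

import (
	"os"
	"text/template"
)

// dropIn paraphrases the kubelet systemd drop-in shown in the log.
const dropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart={{.Kubelet}} --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime": "crio",
		"Kubelet": "/var/lib/minikube/binaries/v1.34.0/kubelet",
		"Node":    "ha-671025",
		"IP":      "192.168.49.2",
	})
}
```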
	I0917 00:40:38.022487  632515 ssh_runner.go:195] Run: crio config
	I0917 00:40:38.068795  632515 cni.go:84] Creating CNI manager for ""
	I0917 00:40:38.068823  632515 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 00:40:38.068838  632515 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:40:38.068868  632515 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-671025 NodeName:ha-671025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:40:38.069022  632515 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-671025"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
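
The generated kubeadm config above is four YAML documents separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, later scp'd as /var/tmp/minikube/kubeadm.yaml.new. A sketch of inventorying such a multi-document file with a yaml.v3 stream decoder (path taken from the scp step below):

```go
package main

import (
	"bytes"
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		// Decode each document just far enough to read its kind.
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF ends the multi-document stream
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
```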
	
	I0917 00:40:38.069055  632515 kube-vip.go:115] generating kube-vip config ...
	I0917 00:40:38.069110  632515 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:40:38.083310  632515 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:40:38.083451  632515 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
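
Because lsmod found no ip_vs modules (the exit status 1 above), the generated kube-vip manifest runs in ARP mode: leader election on the plndr-cp-lock lease picks one control-plane node to answer ARP for 192.168.49.254 on eth0, rather than IPVS-based load-balancing. A sketch of an equivalent ip_vs probe that reads /proc/modules, the same data lsmod reports:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasIPVS reports whether any ip_vs* module is loaded, the check the
// `lsmod | grep ip_vs` step performs before enabling IPVS mode.
func hasIPVS() (bool, error) {
	f, err := os.Open("/proc/modules") // same source lsmod reads
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "ip_vs") {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasIPVS()
	fmt.Println("ip_vs loaded:", ok, "err:", err)
}
```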
	I0917 00:40:38.083504  632515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:40:38.093822  632515 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:40:38.093953  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:40:38.104139  632515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0917 00:40:38.123612  632515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:40:38.143029  632515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I0917 00:40:38.162204  632515 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:40:38.181804  632515 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:40:38.185628  632515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:40:38.198248  632515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:40:38.267211  632515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:40:38.295366  632515 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.2
	I0917 00:40:38.295402  632515 certs.go:194] generating shared ca certs ...
	I0917 00:40:38.295431  632515 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:40:38.295582  632515 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:40:38.295626  632515 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:40:38.295634  632515 certs.go:256] generating profile certs ...
	I0917 00:40:38.295702  632515 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:40:38.295725  632515 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.798d15c9
	I0917 00:40:38.295740  632515 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.798d15c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0917 00:40:38.563189  632515 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.798d15c9 ...
	I0917 00:40:38.563223  632515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.798d15c9: {Name:mk2fd2bd0b9f2426e27af5b187b55653c79ecc2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:40:38.563427  632515 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.798d15c9 ...
	I0917 00:40:38.563441  632515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.798d15c9: {Name:mkc6ea84046c9c5b881ab3e36ceca4d0c3a5f2ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:40:38.563513  632515 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.798d15c9 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt
	I0917 00:40:38.563662  632515 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.798d15c9 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key
	I0917 00:40:38.563795  632515 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:40:38.563812  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:40:38.563827  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:40:38.563838  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:40:38.563851  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:40:38.563861  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:40:38.563871  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:40:38.563883  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:40:38.563893  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
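
The apiserver certificate regenerated above carries IP SANs for the in-cluster service IP 10.96.0.1, localhost, both control-plane node IPs, and the kube-vip address 192.168.49.254, so clients can validate the server at any of those addresses. A self-contained crypto/x509 sketch issuing a certificate with that SAN list; it self-signs for brevity, whereas the real flow signs with the minikubeCA key:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs copied from the crypto.go line in the log.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.3"), net.ParseIP("192.168.49.254"),
		},
	}
	// Self-signed: template doubles as parent.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```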
	I0917 00:40:38.563944  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:40:38.563973  632515 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:40:38.563983  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:40:38.564006  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:40:38.564037  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:40:38.564057  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:40:38.564097  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:40:38.564123  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:40:38.564136  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:40:38.564148  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:40:38.564676  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:40:38.592418  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:40:38.618464  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:40:38.645113  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:40:38.671903  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 00:40:38.699466  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:40:38.726719  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:40:38.754384  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:40:38.781770  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:40:38.810665  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:40:38.839255  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:40:38.870949  632515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:40:38.892273  632515 ssh_runner.go:195] Run: openssl version
	I0917 00:40:38.900199  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:40:38.915450  632515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:40:38.920310  632515 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:40:38.920382  632515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:40:38.928936  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:40:38.942961  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:40:38.957865  632515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:40:38.962632  632515 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:40:38.962710  632515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:40:38.974433  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:40:38.989008  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:40:39.003069  632515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:40:39.008507  632515 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:40:39.008598  632515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:40:39.020277  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:40:39.033876  632515 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:40:39.039917  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:40:39.050424  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:40:39.061076  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:40:39.071182  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:40:39.081231  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:40:39.091810  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
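
Two certificate checks run in this stretch: each CA bundle copied under /usr/share/ca-certificates is linked into /etc/ssl/certs under its OpenSSL subject hash (the 51391683.0-style names above), and every control-plane cert is probed with `-checkend 86400`, which exits non-zero if the cert expires within 24 hours. A sketch of the expiry probe via os/exec:

```go
package main

import (
	"fmt"
	"os/exec"
)

// expiresWithin reports whether the certificate expires within the
// given number of seconds, using the same openssl probe as the log:
// exit 0 means still valid past the window, exit 1 means it expires.
func expiresWithin(certPath string, seconds int) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath,
		"-checkend", fmt.Sprint(seconds))
	err := cmd.Run()
	if err == nil {
		return false, nil
	}
	if _, ok := err.(*exec.ExitError); ok {
		return true, nil
	}
	return false, err // openssl missing, unreadable file, etc.
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 86400)
	fmt.Println("expires within 24h:", soon, "err:", err)
}
```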
	I0917 00:40:39.101435  632515 kubeadm.go:392] StartCluster: {Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false log
viewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAu
thSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:40:39.101589  632515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 00:40:39.101651  632515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:40:39.144935  632515 cri.go:89] found id: "881fdaefda118a66842bac8f4a5c129c196dccc90decb4c7ba8148ae8ae4202b"
	I0917 00:40:39.144965  632515 cri.go:89] found id: "939904409ad77a2fc09eadbf445fe900ce24ccc4275bf93dfc1aed5e7a941726"
	I0917 00:40:39.144971  632515 cri.go:89] found id: "b2732d3309fd11f5c1f39c1c412186079466128c5a6794923ea9143e7ab1def7"
	I0917 00:40:39.144976  632515 cri.go:89] found id: "5e41c9a2f042d57188a38266da0078263acc2fb7aab88eaebc87ad8a5d8cfe08"
	I0917 00:40:39.144980  632515 cri.go:89] found id: "ef9fd7a5f065787410db9cbe176f6f1e916deaae443ad0a27ff662f26b49d595"
	I0917 00:40:39.144985  632515 cri.go:89] found id: ""
	I0917 00:40:39.145041  632515 ssh_runner.go:195] Run: sudo runc list -f json
	I0917 00:40:39.166330  632515 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"5e41c9a2f042d57188a38266da0078263acc2fb7aab88eaebc87ad8a5d8cfe08","pid":899,"status":"running","bundle":"/run/containers/storage/overlay-containers/5e41c9a2f042d57188a38266da0078263acc2fb7aab88eaebc87ad8a5d8cfe08/userdata","rootfs":"/var/lib/containers/storage/overlay/f8daf2d0fc83f27d37f2c17a1131a37f9eb1d0219a84c2ec4a51c2ac9aba19f0/merged","created":"2025-09-17T00:40:38.956554866Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"85eae708","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"85eae708\",\"io.kubernetes.container.ports
\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"5e41c9a2f042d57188a38266da0078263acc2fb7aab88eaebc87ad8a5d8cfe08","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:40:38.882883895Z","io.kubernetes.cri-o.Image":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri-o.ImageRef":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system
\",\"io.kubernetes.pod.uid\":\"74a9cbd6392d4b9acfdd053de2761cb8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-ha-671025_74a9cbd6392d4b9acfdd053de2761cb8/kube-scheduler/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f8daf2d0fc83f27d37f2c17a1131a37f9eb1d0219a84c2ec4a51c2ac9aba19f0/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-ha-671025_kube-system_74a9cbd6392d4b9acfdd053de2761cb8_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3c6cfaaaada7cc47e15cae134822a33798e226c87792acbb4b511bcbabc03648/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3c6cfaaaada7cc47e15cae134822a33798e226c87792acbb4b511bcbabc03648","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-ha-671025_kube-system_74a9cbd6392d4b9acfdd053de2761cb8_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.Std
inOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/74a9cbd6392d4b9acfdd053de2761cb8/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/74a9cbd6392d4b9acfdd053de2761cb8/containers/kube-scheduler/0e31211d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"74a9cbd6392d4b9acfdd053de2761cb8","kubernetes.io/config.hash":"74a9cbd6392d4b9acfdd053de2761cb8","kubernetes.io/config.seen":"2025-09-17T00:40:38.373088265Z","kubernetes.io/config.source":"file","org.syste
md.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"881fdaefda118a66842bac8f4a5c129c196dccc90decb4c7ba8148ae8ae4202b","pid":936,"status":"running","bundle":"/run/containers/storage/overlay-containers/881fdaefda118a66842bac8f4a5c129c196dccc90decb4c7ba8148ae8ae4202b/userdata","rootfs":"/var/lib/containers/storage/overlay/316dd2f04dce7007a8c676808441c6f78dd40563fa3164de617ad905ac862962/merged","created":"2025-09-17T00:40:38.986326516Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d671eaa0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernet
es.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d671eaa0\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8443,\\\"containerPort\\\":8443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"881fdaefda118a66842bac8f4a5c129c196dccc90decb4c7ba8148ae8ae4202b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:40:38.918534597Z","io.kubernetes.cri-o.Image":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri-o.ImageRef":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.Labels
":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b5ccb738eb1160dc60c2973028d04964\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-ha-671025_b5ccb738eb1160dc60c2973028d04964/kube-apiserver/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/316dd2f04dce7007a8c676808441c6f78dd40563fa3164de617ad905ac862962/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-ha-671025_kube-system_b5ccb738eb1160dc60c2973028d04964_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/663c2fdb6a7826331bebf88dacb2edcc2793bd89ca89f8f2a2c6ee3dddcd6b65/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"663c2fdb6a7826331bebf88dacb2edcc2793bd89ca89f8f2a2c6ee3dddcd6b65","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-ha-67
1025_kube-system_b5ccb738eb1160dc60c2973028d04964_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b5ccb738eb1160dc60c2973028d04964/containers/kube-apiserver/adb66b20\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b5ccb738eb1160dc60c2973028d04964/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readon
ly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b5ccb738eb1160dc60c2973028d04964","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"b5ccb738eb1160dc60c2973028d04964","kubernetes.io/config.seen":"2025-09-17T00:40:38.373084752Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec
":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"939904409ad77a2fc09eadbf445fe900ce24ccc4275bf93dfc1aed5e7a941726","pid":939,"status":"running","bundle":"/run/containers/storage/overlay-containers/939904409ad77a2fc09eadbf445fe900ce24ccc4275bf93dfc1aed5e7a941726/userdata","rootfs":"/var/lib/containers/storage/overlay/8e80cca246b9d31c933201bacd6f475a4ce666ebf86e3918745046c21f32df01/merged","created":"2025-09-17T00:40:38.986209649Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7eaa1830","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7eaa1830\",\"io.kubernetes.container.ports\":\"[{\\\"na
me\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"939904409ad77a2fc09eadbf445fe900ce24ccc4275bf93dfc1aed5e7a941726","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:40:38.907118664Z","io.kubernetes.cri-o.Image":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri-o.ImageRef":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-ha-671025\",\"io.kubernetes.pod.namespace\"
:\"kube-system\",\"io.kubernetes.pod.uid\":\"8d1e0f98935496199c8e8278a2410d09\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-ha-671025_8d1e0f98935496199c8e8278a2410d09/kube-controller-manager/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8e80cca246b9d31c933201bacd6f475a4ce666ebf86e3918745046c21f32df01/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-ha-671025_kube-system_8d1e0f98935496199c8e8278a2410d09_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/cc5007dc0bc114337324c055cc351afd2237bc1485ad54a0117fa858e4782b09/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"cc5007dc0bc114337324c055cc351afd2237bc1485ad54a0117fa858e4782b09","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-ha-671025_kube-system_8d1e0f98935496199c8e8278a2410d09_0","io.kubernetes.cri-o.SeccompProfileP
ath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8d1e0f98935496199c8e8278a2410d09/containers/kube-controller-manager/efc1d7f6\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8d1e0f98935496199c8e8278a2410d09/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}
,{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8d1e0f98935496199c8e8278a2410d09","kubernetes.io/config.hash":"8d1e0f98935496199c8e8278a2410d09","kubernetes.io/config.seen":"2025-09-17T00:40:38.3730
86693Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b2732d3309fd11f5c1f39c1c412186079466128c5a6794923ea9143e7ab1def7","pid":907,"status":"running","bundle":"/run/containers/storage/overlay-containers/b2732d3309fd11f5c1f39c1c412186079466128c5a6794923ea9143e7ab1def7/userdata","rootfs":"/var/lib/containers/storage/overlay/ab934f84f0d64a133c76c0de44ec21738c90709d51eb7ff8657b8db8c417152a/merged","created":"2025-09-17T00:40:38.95393389Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d64ad60b","io.kubernetes.container.name":"kube-vip","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotatio
ns":"{\"io.kubernetes.container.hash\":\"d64ad60b\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b2732d3309fd11f5c1f39c1c412186079466128c5a6794923ea9143e7ab1def7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:40:38.895294523Z","io.kubernetes.cri-o.Image":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.ImageName":"ghcr.io/kube-vip/kube-vip:v1.0.0","io.kubernetes.cri-o.ImageRef":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-vip\",\"io.kubernetes.pod.name\":\"kube-vip-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a7817082b8b3b4ebaac6b1c6cc40fe3e\"}","io.kubernetes.cri-o.
LogPath":"/var/log/pods/kube-system_kube-vip-ha-671025_a7817082b8b3b4ebaac6b1c6cc40fe3e/kube-vip/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-vip\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ab934f84f0d64a133c76c0de44ec21738c90709d51eb7ff8657b8db8c417152a/merged","io.kubernetes.cri-o.Name":"k8s_kube-vip_kube-vip-ha-671025_kube-system_a7817082b8b3b4ebaac6b1c6cc40fe3e_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f79cd4d6fce11a79d448a28321ed754e18f98392ba5fbdafeaf8bb1113a45b8a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f79cd4d6fce11a79d448a28321ed754e18f98392ba5fbdafeaf8bb1113a45b8a","io.kubernetes.cri-o.SandboxName":"k8s_kube-vip-ha-671025_kube-system_a7817082b8b3b4ebaac6b1c6cc40fe3e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_p
ath\":\"/var/lib/kubelet/pods/a7817082b8b3b4ebaac6b1c6cc40fe3e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a7817082b8b3b4ebaac6b1c6cc40fe3e/containers/kube-vip/8832e24d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/admin.conf\",\"host_path\":\"/etc/kubernetes/admin.conf\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-vip-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a7817082b8b3b4ebaac6b1c6cc40fe3e","kubernetes.io/config.hash":"a7817082b8b3b4ebaac6b1c6cc40fe3e","kubernetes.io/config.seen":"2025-09-17T00:40:38.373089533Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true
","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ef9fd7a5f065787410db9cbe176f6f1e916deaae443ad0a27ff662f26b49d595","pid":916,"status":"running","bundle":"/run/containers/storage/overlay-containers/ef9fd7a5f065787410db9cbe176f6f1e916deaae443ad0a27ff662f26b49d595/userdata","rootfs":"/var/lib/containers/storage/overlay/7a6096809a9404429b3828fc8b58acae83c06741219b335c3b2b949a4220367e/merged","created":"2025-09-17T00:40:38.971421633Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9e20c65","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9e20c65\",\"io.kubernetes.container.
ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ef9fd7a5f065787410db9cbe176f6f1e916deaae443ad0a27ff662f26b49d595","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:40:38.881807971Z","io.kubernetes.cri-o.Image":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri-o.ImageRef":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\
":\"629bf94aa8286a4aae957269fae7c79b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-ha-671025_629bf94aa8286a4aae957269fae7c79b/etcd/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7a6096809a9404429b3828fc8b58acae83c06741219b335c3b2b949a4220367e/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-ha-671025_kube-system_629bf94aa8286a4aae957269fae7c79b_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/adb3a22e9933ceddcc041c13f2cc2f963b5a59432e8bbcdfc2ff14814e4b87b0/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"adb3a22e9933ceddcc041c13f2cc2f963b5a59432e8bbcdfc2ff14814e4b87b0","io.kubernetes.cri-o.SandboxName":"k8s_etcd-ha-671025_kube-system_629bf94aa8286a4aae957269fae7c79b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"co
ntainer_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/629bf94aa8286a4aae957269fae7c79b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/629bf94aa8286a4aae957269fae7c79b/containers/etcd/e9d2259a\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"629bf94aa8286a4aae957269fae7c79b","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"629bf94aa8286a4aae957
269fae7c79b","kubernetes.io/config.seen":"2025-09-17T00:40:38.373079434Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0917 00:40:39.166778  632515 cri.go:126] list returned 5 containers
	I0917 00:40:39.166798  632515 cri.go:129] container: {ID:5e41c9a2f042d57188a38266da0078263acc2fb7aab88eaebc87ad8a5d8cfe08 Status:running}
	I0917 00:40:39.166821  632515 cri.go:135] skipping {5e41c9a2f042d57188a38266da0078263acc2fb7aab88eaebc87ad8a5d8cfe08 running}: state = "running", want "paused"
	I0917 00:40:39.166836  632515 cri.go:129] container: {ID:881fdaefda118a66842bac8f4a5c129c196dccc90decb4c7ba8148ae8ae4202b Status:running}
	I0917 00:40:39.166845  632515 cri.go:135] skipping {881fdaefda118a66842bac8f4a5c129c196dccc90decb4c7ba8148ae8ae4202b running}: state = "running", want "paused"
	I0917 00:40:39.166854  632515 cri.go:129] container: {ID:939904409ad77a2fc09eadbf445fe900ce24ccc4275bf93dfc1aed5e7a941726 Status:running}
	I0917 00:40:39.166860  632515 cri.go:135] skipping {939904409ad77a2fc09eadbf445fe900ce24ccc4275bf93dfc1aed5e7a941726 running}: state = "running", want "paused"
	I0917 00:40:39.166869  632515 cri.go:129] container: {ID:b2732d3309fd11f5c1f39c1c412186079466128c5a6794923ea9143e7ab1def7 Status:running}
	I0917 00:40:39.166874  632515 cri.go:135] skipping {b2732d3309fd11f5c1f39c1c412186079466128c5a6794923ea9143e7ab1def7 running}: state = "running", want "paused"
	I0917 00:40:39.166883  632515 cri.go:129] container: {ID:ef9fd7a5f065787410db9cbe176f6f1e916deaae443ad0a27ff662f26b49d595 Status:running}
	I0917 00:40:39.166889  632515 cri.go:135] skipping {ef9fd7a5f065787410db9cbe176f6f1e916deaae443ad0a27ff662f26b49d595 running}: state = "running", want "paused"
	I0917 00:40:39.166941  632515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:40:39.178023  632515 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:40:39.178070  632515 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:40:39.178118  632515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:40:39.188385  632515 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:40:39.188902  632515 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-671025" does not appear in /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:40:39.189037  632515 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-517646/kubeconfig needs updating (will repair): [kubeconfig missing "ha-671025" cluster setting kubeconfig missing "ha-671025" context setting]
	I0917 00:40:39.189368  632515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:40:39.190094  632515 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 00:40:39.190673  632515 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:40:39.190691  632515 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:40:39.190697  632515 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:40:39.190702  632515 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:40:39.190709  632515 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:40:39.190740  632515 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:40:39.191174  632515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:40:39.200970  632515 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0917 00:40:39.200996  632515 kubeadm.go:593] duration metric: took 22.91871ms to restartPrimaryControlPlane
	I0917 00:40:39.201006  632515 kubeadm.go:394] duration metric: took 99.589549ms to StartCluster
	I0917 00:40:39.201027  632515 settings.go:142] acquiring lock: {Name:mk3b4e5824fb8718eece00dc70a9d05f0af2a028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:40:39.201103  632515 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:40:39.201826  632515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:40:39.202080  632515 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:40:39.202107  632515 start.go:241] waiting for startup goroutines ...
	I0917 00:40:39.202116  632515 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:40:39.202366  632515 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:40:39.205103  632515 out.go:179] * Enabled addons: 
	I0917 00:40:39.206259  632515 addons.go:514] duration metric: took 4.134791ms for enable addons: enabled=[]
	I0917 00:40:39.206295  632515 start.go:246] waiting for cluster config update ...
	I0917 00:40:39.206310  632515 start.go:255] writing updated cluster config ...
	I0917 00:40:39.208316  632515 out.go:203] 
	I0917 00:40:39.209913  632515 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:40:39.210037  632515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:40:39.211628  632515 out.go:179] * Starting "ha-671025-m02" control-plane node in "ha-671025" cluster
	I0917 00:40:39.212849  632515 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:40:39.214412  632515 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:40:39.215588  632515 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:40:39.215619  632515 cache.go:58] Caching tarball of preloaded images
	I0917 00:40:39.215696  632515 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:40:39.215727  632515 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:40:39.215739  632515 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:40:39.215894  632515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:40:39.240756  632515 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:40:39.240793  632515 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:40:39.240819  632515 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:40:39.240852  632515 start.go:360] acquireMachinesLock for ha-671025-m02: {Name:mk1465985964f60af81adbf10dbe0a21c7eb20d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:40:39.240925  632515 start.go:364] duration metric: took 51.172µs to acquireMachinesLock for "ha-671025-m02"
	I0917 00:40:39.240952  632515 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:40:39.240974  632515 fix.go:54] fixHost starting: m02
	I0917 00:40:39.241212  632515 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:40:39.262782  632515 fix.go:112] recreateIfNeeded on ha-671025-m02: state=Stopped err=<nil>
	W0917 00:40:39.262826  632515 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:40:39.264705  632515 out.go:252] * Restarting existing docker container for "ha-671025-m02" ...
	I0917 00:40:39.264774  632515 cli_runner.go:164] Run: docker start ha-671025-m02
	I0917 00:40:39.525550  632515 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:40:39.548227  632515 kic.go:430] container "ha-671025-m02" state is running.
	I0917 00:40:39.548819  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:40:39.573516  632515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:40:39.573761  632515 machine.go:93] provisionDockerMachine start ...
	I0917 00:40:39.573819  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:40:39.595101  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:40:39.595449  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I0917 00:40:39.595465  632515 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:40:39.596146  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57494->127.0.0.1:33208: read: connection reset by peer
	I0917 00:40:42.744302  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m02
	
	I0917 00:40:42.744341  632515 ubuntu.go:182] provisioning hostname "ha-671025-m02"
	I0917 00:40:42.744440  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:40:42.772727  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:40:42.773041  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I0917 00:40:42.773066  632515 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m02 && echo "ha-671025-m02" | sudo tee /etc/hostname
	I0917 00:40:42.966840  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m02
	
	I0917 00:40:42.966938  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:40:42.999313  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:40:42.999622  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I0917 00:40:42.999654  632515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:40:43.166450  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:40:43.166486  632515 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:40:43.166512  632515 ubuntu.go:190] setting up certificates
	I0917 00:40:43.166528  632515 provision.go:84] configureAuth start
	I0917 00:40:43.166598  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:40:43.191986  632515 provision.go:143] copyHostCerts
	I0917 00:40:43.192036  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:40:43.192077  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:40:43.192090  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:40:43.192181  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:40:43.192299  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:40:43.192337  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:40:43.192347  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:40:43.192424  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:40:43.192541  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:40:43.192561  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:40:43.192566  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:40:43.192607  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:40:43.192708  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m02 san=[127.0.0.1 192.168.49.3 ha-671025-m02 localhost minikube]
	I0917 00:40:43.430833  632515 provision.go:177] copyRemoteCerts
	I0917 00:40:43.430920  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:40:43.430997  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:40:43.459960  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:40:43.568596  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:40:43.568675  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:40:43.595799  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:40:43.595866  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:40:43.622160  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:40:43.622224  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:40:43.650486  632515 provision.go:87] duration metric: took 483.938346ms to configureAuth
	I0917 00:40:43.650520  632515 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:40:43.650749  632515 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:40:43.650849  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:40:43.669815  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:40:43.670087  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I0917 00:40:43.670108  632515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:40:44.121666  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:40:44.121696  632515 machine.go:96] duration metric: took 4.547919987s to provisionDockerMachine
	I0917 00:40:44.121708  632515 start.go:293] postStartSetup for "ha-671025-m02" (driver="docker")
	I0917 00:40:44.121722  632515 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:40:44.121789  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:40:44.121842  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:40:44.144239  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:40:44.248012  632515 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:40:44.252106  632515 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:40:44.252137  632515 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:40:44.252145  632515 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:40:44.252153  632515 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:40:44.252168  632515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:40:44.252230  632515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:40:44.252311  632515 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:40:44.252321  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:40:44.252424  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:40:44.262184  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:40:44.291527  632515 start.go:296] duration metric: took 169.798795ms for postStartSetup
	I0917 00:40:44.291632  632515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:40:44.291683  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:40:44.312473  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:40:44.406975  632515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:40:44.411956  632515 fix.go:56] duration metric: took 5.170985164s for fixHost
	I0917 00:40:44.411984  632515 start.go:83] releasing machines lock for "ha-671025-m02", held for 5.171045077s
	I0917 00:40:44.412067  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:40:44.433399  632515 out.go:179] * Found network options:
	I0917 00:40:44.434772  632515 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:40:44.436118  632515 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:40:44.436158  632515 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:40:44.436226  632515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:40:44.436275  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:40:44.436331  632515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:40:44.436542  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:40:44.456132  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:40:44.456175  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:40:44.691367  632515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:40:44.696760  632515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:40:44.706855  632515 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:40:44.706939  632515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:40:44.717107  632515 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:40:44.717138  632515 start.go:495] detecting cgroup driver to use...
	I0917 00:40:44.717177  632515 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:40:44.717226  632515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:40:44.731567  632515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:40:44.745939  632515 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:40:44.745990  632515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:40:44.763319  632515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:40:44.776506  632515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:40:44.894007  632515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:40:45.038909  632515 docker.go:234] disabling docker service ...
	I0917 00:40:45.038982  632515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:40:45.053638  632515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:40:45.066893  632515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:40:45.205587  632515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:40:45.364462  632515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:40:45.383628  632515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:40:45.405497  632515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:40:45.405564  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:45.416825  632515 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:40:45.416919  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:45.428902  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:45.443620  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:45.455563  632515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:40:45.466416  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:45.478152  632515 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:45.490283  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:45.502127  632515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:40:45.512246  632515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:40:45.521843  632515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:40:45.640461  632515 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 00:40:45.896355  632515 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:40:45.896473  632515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:40:45.900956  632515 start.go:563] Will wait 60s for crictl version
	I0917 00:40:45.901026  632515 ssh_runner.go:195] Run: which crictl
	I0917 00:40:45.905222  632515 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:40:45.942130  632515 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:40:45.942214  632515 ssh_runner.go:195] Run: crio --version
	I0917 00:40:45.980992  632515 ssh_runner.go:195] Run: crio --version
	I0917 00:40:46.023154  632515 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:40:46.024799  632515 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:40:46.026246  632515 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:40:46.045491  632515 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:40:46.049717  632515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:40:46.061967  632515 mustload.go:65] Loading cluster: ha-671025
	I0917 00:40:46.062188  632515 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:40:46.062431  632515 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:40:46.080226  632515 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:40:46.080512  632515 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.3
	I0917 00:40:46.080525  632515 certs.go:194] generating shared ca certs ...
	I0917 00:40:46.080543  632515 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:40:46.080697  632515 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:40:46.080772  632515 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:40:46.080790  632515 certs.go:256] generating profile certs ...
	I0917 00:40:46.080890  632515 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:40:46.080964  632515 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.d800739c
	I0917 00:40:46.081013  632515 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:40:46.081029  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:40:46.081049  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:40:46.081088  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:40:46.081108  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:40:46.081127  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:40:46.081145  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:40:46.081164  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:40:46.081180  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:40:46.081259  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:40:46.081301  632515 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:40:46.081315  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:40:46.081346  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:40:46.081376  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:40:46.081438  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:40:46.081493  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:40:46.081540  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:40:46.081561  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:40:46.081587  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:40:46.081702  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:40:46.101025  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:40:46.189723  632515 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:40:46.194282  632515 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:40:46.215250  632515 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:40:46.220905  632515 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 00:40:46.238548  632515 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:40:46.243187  632515 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:40:46.259431  632515 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:40:46.263838  632515 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 00:40:46.278404  632515 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:40:46.282305  632515 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:40:46.297261  632515 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:40:46.301896  632515 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:40:46.316846  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:40:46.346007  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:40:46.376478  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:40:46.405429  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:40:46.433262  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 00:40:46.462010  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:40:46.490142  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:40:46.518271  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:40:46.546483  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:40:46.574948  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:40:46.603480  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:40:46.632648  632515 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:40:46.654796  632515 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 00:40:46.676468  632515 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:40:46.697823  632515 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 00:40:46.718611  632515 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:40:46.740412  632515 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:40:46.763172  632515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:40:46.784790  632515 ssh_runner.go:195] Run: openssl version
	I0917 00:40:46.791348  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:40:46.802517  632515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:40:46.806431  632515 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:40:46.806479  632515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:40:46.813628  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:40:46.824091  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:40:46.835716  632515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:40:46.839866  632515 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:40:46.839925  632515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:40:46.847187  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:40:46.857010  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:40:46.867839  632515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:40:46.871864  632515 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:40:46.871928  632515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:40:46.879300  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:40:46.889305  632515 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:40:46.893181  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:40:46.900268  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:40:46.907385  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:40:46.914194  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:40:46.921136  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:40:46.927929  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:40:46.934672  632515 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 crio true true} ...
	I0917 00:40:46.934768  632515 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:40:46.934793  632515 kube-vip.go:115] generating kube-vip config ...
	I0917 00:40:46.934825  632515 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:40:46.949032  632515 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:40:46.949125  632515 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 00:40:46.949189  632515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:40:46.958935  632515 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:40:46.958997  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:40:46.969133  632515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 00:40:46.989052  632515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:40:47.009277  632515 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:40:47.030373  632515 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:40:47.034630  632515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:40:47.046734  632515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:40:47.153601  632515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:40:47.166587  632515 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:40:47.166924  632515 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:40:47.169412  632515 out.go:179] * Verifying Kubernetes components...
	I0917 00:40:47.170627  632515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:40:47.282243  632515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:40:47.295175  632515 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:40:47.295250  632515 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:40:47.295529  632515 node_ready.go:35] waiting up to 6m0s for node "ha-671025-m02" to be "Ready" ...
	I0917 00:40:47.304206  632515 node_ready.go:49] node "ha-671025-m02" is "Ready"
	I0917 00:40:47.304237  632515 node_ready.go:38] duration metric: took 8.673255ms for node "ha-671025-m02" to be "Ready" ...
	I0917 00:40:47.304254  632515 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:40:47.304311  632515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:40:47.316591  632515 api_server.go:72] duration metric: took 149.952703ms to wait for apiserver process to appear ...
	I0917 00:40:47.316615  632515 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:40:47.316635  632515 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:40:47.322489  632515 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:40:47.323523  632515 api_server.go:141] control plane version: v1.34.0
	I0917 00:40:47.323550  632515 api_server.go:131] duration metric: took 6.928789ms to wait for apiserver health ...
	I0917 00:40:47.323558  632515 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:40:47.329799  632515 system_pods.go:59] 24 kube-system pods found
	I0917 00:40:47.329836  632515 system_pods.go:61] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:40:47.329843  632515 system_pods.go:61] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:40:47.329851  632515 system_pods.go:61] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:40:47.329857  632515 system_pods.go:61] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:40:47.329861  632515 system_pods.go:61] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running
	I0917 00:40:47.329864  632515 system_pods.go:61] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:40:47.329868  632515 system_pods.go:61] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:40:47.329874  632515 system_pods.go:61] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:40:47.329879  632515 system_pods.go:61] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:40:47.329888  632515 system_pods.go:61] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:40:47.329893  632515 system_pods.go:61] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running
	I0917 00:40:47.329901  632515 system_pods.go:61] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:40:47.329908  632515 system_pods.go:61] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:40:47.329912  632515 system_pods.go:61] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running
	I0917 00:40:47.329918  632515 system_pods.go:61] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:40:47.329922  632515 system_pods.go:61] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:40:47.329925  632515 system_pods.go:61] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:40:47.329930  632515 system_pods.go:61] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:40:47.329937  632515 system_pods.go:61] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:40:47.329941  632515 system_pods.go:61] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running
	I0917 00:40:47.329946  632515 system_pods.go:61] "kube-vip-ha-671025" [bcb7c84b-932c-463e-a710-1d665741e70a] Running
	I0917 00:40:47.329949  632515 system_pods.go:61] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:40:47.329952  632515 system_pods.go:61] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:40:47.329954  632515 system_pods.go:61] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:40:47.329960  632515 system_pods.go:74] duration metric: took 6.396975ms to wait for pod list to return data ...
	I0917 00:40:47.329969  632515 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:40:47.333216  632515 default_sa.go:45] found service account: "default"
	I0917 00:40:47.333237  632515 default_sa.go:55] duration metric: took 3.262813ms for default service account to be created ...
	I0917 00:40:47.333246  632515 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:40:47.338819  632515 system_pods.go:86] 24 kube-system pods found
	I0917 00:40:47.338855  632515 system_pods.go:89] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:40:47.338863  632515 system_pods.go:89] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:40:47.338871  632515 system_pods.go:89] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:40:47.338877  632515 system_pods.go:89] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:40:47.338881  632515 system_pods.go:89] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running
	I0917 00:40:47.338885  632515 system_pods.go:89] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:40:47.338888  632515 system_pods.go:89] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:40:47.338891  632515 system_pods.go:89] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:40:47.338896  632515 system_pods.go:89] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:40:47.338903  632515 system_pods.go:89] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:40:47.338910  632515 system_pods.go:89] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running
	I0917 00:40:47.338916  632515 system_pods.go:89] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:40:47.338921  632515 system_pods.go:89] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:40:47.338928  632515 system_pods.go:89] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running
	I0917 00:40:47.338932  632515 system_pods.go:89] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:40:47.338936  632515 system_pods.go:89] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:40:47.338939  632515 system_pods.go:89] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:40:47.338946  632515 system_pods.go:89] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:40:47.338951  632515 system_pods.go:89] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:40:47.338956  632515 system_pods.go:89] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running
	I0917 00:40:47.338959  632515 system_pods.go:89] "kube-vip-ha-671025" [bcb7c84b-932c-463e-a710-1d665741e70a] Running
	I0917 00:40:47.338962  632515 system_pods.go:89] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:40:47.338965  632515 system_pods.go:89] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:40:47.338968  632515 system_pods.go:89] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:40:47.338975  632515 system_pods.go:126] duration metric: took 5.723447ms to wait for k8s-apps to be running ...
	I0917 00:40:47.338984  632515 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:40:47.339032  632515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:40:47.352522  632515 system_svc.go:56] duration metric: took 13.515878ms WaitForService to wait for kubelet
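For reference, the system_svc check above treats the exit status of `systemctl is-active --quiet` as the readiness signal: exit 0 means active, anything else means not running. A minimal sketch of that check (the helper name is hypothetical; the argument list is copied verbatim from the ssh_runner line above):

package svcsketch

import "os/exec"

// kubeletRunning mirrors the system_svc check in the log:
// `systemctl is-active --quiet` exits 0 only when the unit is
// active, so a nil error from Run doubles as "running".
func kubeletRunning() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet",
		"service", "kubelet").Run() == nil
}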
	I0917 00:40:47.352562  632515 kubeadm.go:578] duration metric: took 185.927121ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:40:47.352585  632515 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:40:47.356328  632515 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:40:47.356359  632515 node_conditions.go:123] node cpu capacity is 8
	I0917 00:40:47.356373  632515 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:40:47.356379  632515 node_conditions.go:123] node cpu capacity is 8
	I0917 00:40:47.356385  632515 node_conditions.go:105] duration metric: took 3.794845ms to run NodePressure ...
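The NodePressure step above just lists the nodes and reads back the capacity fields each one reports. A minimal client-go sketch of the same idea (hypothetical helper, not minikube's actual implementation):

package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists every node and prints the ephemeral-storage
// and CPU capacity values that the node_conditions lines above report.
func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n",
			n.Name, storage.String(), cpu.String())
	}
	return nil
}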
	I0917 00:40:47.356411  632515 start.go:241] waiting for startup goroutines ...
	I0917 00:40:47.356443  632515 start.go:255] writing updated cluster config ...
	I0917 00:40:47.358857  632515 out.go:203] 
	I0917 00:40:47.360340  632515 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:40:47.360490  632515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:40:47.362332  632515 out.go:179] * Starting "ha-671025-m04" worker node in "ha-671025" cluster
	I0917 00:40:47.363542  632515 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:40:47.364625  632515 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:40:47.365563  632515 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:40:47.365591  632515 cache.go:58] Caching tarball of preloaded images
	I0917 00:40:47.365656  632515 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:40:47.365708  632515 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:40:47.365722  632515 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:40:47.365844  632515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:40:47.387506  632515 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:40:47.387525  632515 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:40:47.387542  632515 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:40:47.387573  632515 start.go:360] acquireMachinesLock for ha-671025-m04: {Name:mka8d143727db583191b041d9fdffdc34290d3fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:40:47.387634  632515 start.go:364] duration metric: took 39.357µs to acquireMachinesLock for "ha-671025-m04"
	I0917 00:40:47.387655  632515 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:40:47.387662  632515 fix.go:54] fixHost starting: m04
	I0917 00:40:47.387922  632515 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:40:47.405966  632515 fix.go:112] recreateIfNeeded on ha-671025-m04: state=Stopped err=<nil>
	W0917 00:40:47.406001  632515 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:40:47.407782  632515 out.go:252] * Restarting existing docker container for "ha-671025-m04" ...
	I0917 00:40:47.407855  632515 cli_runner.go:164] Run: docker start ha-671025-m04
	I0917 00:40:47.672894  632515 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:40:47.693808  632515 kic.go:430] container "ha-671025-m04" state is running.
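The restart sequence above is minikube's cli_runner pattern: shell out to the docker CLI, inspect the container state, start it if stopped, and inspect again. A condensed sketch of that sequence (hypothetical helper, assuming the docker CLI is on PATH):

package kicsketch

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureRunning mirrors the inspect/start/inspect sequence in the log:
// read the container state, and if it is not "running", start it.
func ensureRunning(name string) error {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return fmt.Errorf("inspect %s: %w", name, err)
	}
	if strings.TrimSpace(string(out)) == "running" {
		return nil
	}
	if err := exec.Command("docker", "start", name).Run(); err != nil {
		return fmt.Errorf("start %s: %w", name, err)
	}
	return nil
}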
	I0917 00:40:47.694266  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:40:47.716290  632515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:40:47.716578  632515 machine.go:93] provisionDockerMachine start ...
	I0917 00:40:47.716642  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:40:47.738438  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:40:47.738710  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I0917 00:40:47.738727  632515 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:40:47.739696  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35420->127.0.0.1:33213: read: connection reset by peer
	I0917 00:40:50.777847  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	[... 57 more identical "unable to authenticate" handshake failures, one every ~3s ...]
	I0917 00:43:46.958281  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:43:49.959450  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
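The three minutes of retries above all fail the same way: "attempted methods [none publickey], no supported methods remain" means the server rejected the one private key the client offered, so no authentication method was left to try. A stripped-down sketch of that dial with golang.org/x/crypto/ssh (port and key path taken from the log; the helper itself is hypothetical):

package sshsketch

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// dialNode attempts a publickey-only SSH handshake the way libmachine
// does. When the server does not accept the key, ssh.Dial returns the
// "unable to authenticate, attempted methods [none publickey]" error
// seen throughout this log.
func dialNode(addr, keyPath string) (*ssh.Client, error) {
	pemBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, fmt.Errorf("read key: %w", err)
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		return nil, fmt.Errorf("parse key: %w", err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a test VM, not production
	}
	return ssh.Dial("tcp", addr, cfg) // e.g. addr = "127.0.0.1:33213"
}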
	I0917 00:43:49.959523  632515 ubuntu.go:182] provisioning hostname "ha-671025-m04"
	I0917 00:43:49.959627  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:43:49.979209  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:43:49.979506  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I0917 00:43:49.979526  632515 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m04 && echo "ha-671025-m04" | sudo tee /etc/hostname
	I0917 00:43:50.016366  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	[... 58 more identical "unable to authenticate" handshake failures, one every ~3s ...]
	I0917 00:46:49.241870  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:46:52.242138  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:46:52.242268  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:46:52.264751  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:46:52.265071  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I0917 00:46:52.265100  632515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
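The shell fragment above is rendered from the machine name: if no /etc/hosts line already maps the hostname, it rewrites the 127.0.1.1 entry when one exists and appends one otherwise. A hypothetical Go helper that produces the same command string:

package hostsketch

import "fmt"

// hostsFixupCmd reproduces the /etc/hosts command above for a given
// hostname, e.g. "ha-671025-m04".
func hostsFixupCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}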
	I0917 00:46:52.301891  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	[... 58 more identical "unable to authenticate" handshake failures, one every ~3s ...]
	I0917 00:49:51.529779  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:54.531505  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:49:54.531573  632515 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:49:54.531626  632515 ubuntu.go:190] setting up certificates
	I0917 00:49:54.531647  632515 provision.go:84] configureAuth start
	I0917 00:49:54.531739  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:49:54.551339  632515 provision.go:143] copyHostCerts
	I0917 00:49:54.551429  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:49:54.551478  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:49:54.551489  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:49:54.551576  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:49:54.551695  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:49:54.551716  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:49:54.551724  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:49:54.551770  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:49:54.551842  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:49:54.551862  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:49:54.551870  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:49:54.551909  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:49:54.551987  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
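The provision.go:117 line above issues a server certificate signed by the local CA, with the node's names and IPs (127.0.0.1, 192.168.49.5, ha-671025-m04, localhost, minikube) as SANs. A condensed sketch of that issuance with crypto/x509, assuming the CA certificate and key are already loaded (function and parameter names are made up):

package certsketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// issueServerCert creates a fresh key pair and signs a server cert
// carrying the given organization and SANs with the CA key.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey,
	org string, dnsNames []string, ips []net.IP) (certPEM, keyPEM []byte, err error) {

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}}, // e.g. "jenkins.ha-671025-m04"
		DNSNames:     dnsNames,
		IPAddresses:  ips,
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{
		Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}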
	I0917 00:49:55.075317  632515 provision.go:177] copyRemoteCerts
	I0917 00:49:55.075413  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:49:55.075466  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:49:55.094562  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:49:55.131095  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:55.131145  632515 retry.go:31] will retry after 181.743857ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:49:55.349302  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:55.349337  632515 retry.go:31] will retry after 327.982556ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:49:55.713462  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:55.713496  632515 retry.go:31] will retry after 348.016843ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:49:56.097960  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:56.097998  632515 retry.go:31] will retry after 483.850248ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:49:56.619626  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:56.619759  632515 retry.go:31] will retry after 144.183744ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:56.765023  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:49:56.784089  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:49:56.821621  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:56.821666  632515 retry.go:31] will retry after 278.594161ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:49:57.137033  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:57.137068  632515 retry.go:31] will retry after 428.68953ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:49:57.603586  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:57.603622  632515 retry.go:31] will retry after 735.913432ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:49:58.377129  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:58.377217  632515 provision.go:87] duration metric: took 3.845563473s to configureAuth
	W0917 00:49:58.377227  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:58.377241  632515 retry.go:31] will retry after 106.534µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
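The retry.go lines interleaved above show a bounded retry with growing, jittered delays ("will retry after 181.743857ms / 327.982556ms / 348.016843ms / ..."). A self-contained sketch of that pattern (the helper is hypothetical, not minikube's actual retry package):

package retrysketch

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or maxElapsed has
// passed, sleeping a doubling, jittered delay between attempts.
func retryWithBackoff(fn func() error, initial, maxElapsed time.Duration) error {
	start := time.Now()
	delay := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) >= maxElapsed {
			return fmt.Errorf("gave up after %s: %w",
				time.Since(start).Round(time.Millisecond), err)
		}
		// up to 50% jitter keeps concurrent retries from synchronizing
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)/2+1)))
		delay *= 2
	}
}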
	I0917 00:49:58.378407  632515 provision.go:84] configureAuth start
	I0917 00:49:58.378491  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:49:58.396865  632515 provision.go:143] copyHostCerts
	I0917 00:49:58.396914  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:49:58.396954  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:49:58.396964  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:49:58.397051  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:49:58.397179  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:49:58.397209  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:49:58.397215  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:49:58.397247  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:49:58.397342  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:49:58.397378  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:49:58.397384  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:49:58.397427  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:49:58.397525  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:49:58.711543  632515 provision.go:177] copyRemoteCerts
	I0917 00:49:58.711617  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:49:58.711656  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:49:58.732044  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:49:58.768196  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:58.768239  632515 retry.go:31] will retry after 272.740384ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:49:59.077518  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:59.077563  632515 retry.go:31] will retry after 353.940506ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:49:59.468351  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:59.468419  632515 retry.go:31] will retry after 790.243256ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:00.295054  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:00.295156  632515 retry.go:31] will retry after 230.050538ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:00.525535  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:00.546341  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:00.583328  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:00.583366  632515 retry.go:31] will retry after 350.741503ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:00.970853  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:00.970893  632515 retry.go:31] will retry after 300.695459ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:01.309524  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:01.309557  632515 retry.go:31] will retry after 595.595625ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:01.943226  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:01.943326  632515 provision.go:87] duration metric: took 3.564901302s to configureAuth
	W0917 00:50:01.943340  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:01.943370  632515 retry.go:31] will retry after 82.092µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
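
The block above is one complete configureAuth cycle: host certs are re-copied, a fresh server cert is generated, and then every SSH dial to 127.0.0.1:33213 fails the handshake because public-key auth is the only method offered and the node rejects the key ("attempted methods [none publickey]"). Below is a minimal sketch of the dial-and-retry pattern the sshutil.go/retry.go lines record; the names (dialSSH, retryDial) and the key path are illustrative assumptions, not minikube's actual API.

// Minimal sketch (not minikube's actual API) of the dial-and-retry
// pattern visible in the sshutil.go/retry.go lines above.
package main

import (
	"fmt"
	"log"
	"math/rand"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialSSH makes one handshake attempt with public-key auth only. On a
// rejected key, ssh.Dial returns the "unable to authenticate, attempted
// methods [none publickey]" error seen throughout the log.
func dialSSH(addr string, signer ssh.Signer) (*ssh.Client, error) {
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
		Timeout:         10 * time.Second,
	}
	return ssh.Dial("tcp", addr, cfg)
}

// retryDial retries the handshake with short jittered waits, mirroring
// the "will retry after ..." lines emitted between dial attempts.
func retryDial(addr string, signer ssh.Signer, attempts int) (*ssh.Client, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := dialSSH(addr, signer)
		if err == nil {
			return client, nil
		}
		lastErr = err
		wait := time.Duration(rand.Int63n(int64(800 * time.Millisecond)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return nil, fmt.Errorf("new client: %w", lastErr)
}

func main() {
	// Hypothetical key path; the harness uses the machine's id_rsa.
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.ssh/id_rsa"))
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := retryDial("127.0.0.1:33213", signer, 4); err != nil {
		log.Fatal(err)
	}
}
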
	I0917 00:50:01.944551  632515 provision.go:84] configureAuth start
	I0917 00:50:01.944631  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:01.964075  632515 provision.go:143] copyHostCerts
	I0917 00:50:01.964128  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:01.964160  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:01.964174  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:01.964250  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:01.964378  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:01.964422  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:01.964429  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:01.964463  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:01.964551  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:01.964576  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:01.964584  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:01.964616  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:01.964708  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:02.030303  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:02.030365  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:02.030421  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:02.050138  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:02.086170  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:02.086227  632515 retry.go:31] will retry after 299.253149ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:02.422896  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:02.422925  632515 retry.go:31] will retry after 210.347632ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:02.671216  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:02.671255  632515 retry.go:31] will retry after 814.790488ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:03.521857  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:03.521954  632515 retry.go:31] will retry after 176.199116ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:03.698338  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:03.716938  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:03.753247  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:03.753288  632515 retry.go:31] will retry after 155.234551ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:03.945915  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:03.945949  632515 retry.go:31] will retry after 523.325975ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:04.505459  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:04.505496  632515 retry.go:31] will retry after 744.659161ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:05.286909  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:05.287029  632515 provision.go:87] duration metric: took 3.342456692s to configureAuth
	W0917 00:50:05.287040  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:05.287056  632515 retry.go:31] will retry after 174.81µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
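
Each cycle also re-runs the "generating server cert" step with the SANs logged above (127.0.0.1, 192.168.49.5, ha-671025-m04, localhost, minikube). The following is a self-contained sketch of that step using Go's crypto/x509, with an in-memory CA standing in for ca.pem/ca-key.pem; it is an illustrative reconstruction, not minikube's provision.go.

// Illustrative reconstruction of the "generating server cert" step,
// using the SANs from the log. The in-memory CA is a stand-in for the
// ca.pem/ca-key.pem files referenced above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	serverTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-671025-m04"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		// SANs exactly as logged:
		// san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
		DNSNames:    []string{"ha-671025-m04", "localhost", "minikube"},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	if _, err := x509.CreateCertificate(rand.Reader, serverTmpl, caCert, &serverKey.PublicKey, caKey); err != nil {
		log.Fatal(err)
	}
	log.Println("server cert generated (in-memory sketch)")
}
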
	I0917 00:50:05.288151  632515 provision.go:84] configureAuth start
	I0917 00:50:05.288248  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:05.307557  632515 provision.go:143] copyHostCerts
	I0917 00:50:05.307595  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:05.307622  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:05.307631  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:05.307690  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:05.307771  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:05.307789  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:05.307793  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:05.307813  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:05.307910  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:05.307938  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:05.307948  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:05.307977  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:05.308069  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:06.124049  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:06.124110  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:06.124147  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:06.142960  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:06.179541  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:06.179577  632515 retry.go:31] will retry after 253.641842ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:06.470694  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:06.470724  632515 retry.go:31] will retry after 361.06837ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:06.869140  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:06.869183  632515 retry.go:31] will retry after 748.337326ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:07.654341  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:07.654488  632515 retry.go:31] will retry after 302.218349ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:07.957049  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:07.975836  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:08.012335  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:08.012373  632515 retry.go:31] will retry after 343.545558ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:08.393469  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:08.393509  632515 retry.go:31] will retry after 292.709088ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:08.722910  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:08.722952  632515 retry.go:31] will retry after 782.245002ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:09.542622  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:09.542713  632515 provision.go:87] duration metric: took 4.254541048s to configureAuth
	W0917 00:50:09.542725  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:09.542740  632515 retry.go:31] will retry after 363.465µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:09.543896  632515 provision.go:84] configureAuth start
	I0917 00:50:09.543987  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:09.563254  632515 provision.go:143] copyHostCerts
	I0917 00:50:09.563298  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:09.563342  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:09.563350  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:09.563447  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:09.563550  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:09.563569  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:09.563574  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:09.563599  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:09.563658  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:09.563679  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:09.563682  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:09.563701  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:09.563770  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:10.100678  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:10.100740  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:10.100776  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:10.120637  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:10.159175  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:10.159210  632515 retry.go:31] will retry after 316.977532ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:10.512855  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:10.512910  632515 retry.go:31] will retry after 206.602874ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:10.757756  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:10.757791  632515 retry.go:31] will retry after 388.38065ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:11.183258  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:11.183293  632515 retry.go:31] will retry after 551.25599ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:11.772010  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:11.772120  632515 retry.go:31] will retry after 288.087276ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:12.060552  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:12.079987  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:12.117424  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:12.117463  632515 retry.go:31] will retry after 255.354599ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:12.409744  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:12.409776  632515 retry.go:31] will retry after 522.962893ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:12.970294  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:12.970350  632515 retry.go:31] will retry after 438.867721ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:13.446548  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:13.446669  632515 provision.go:87] duration metric: took 3.902748058s to configureAuth
	W0917 00:50:13.446683  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:13.446698  632515 retry.go:31] will retry after 468.526µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:13.447846  632515 provision.go:84] configureAuth start
	I0917 00:50:13.447950  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:13.467144  632515 provision.go:143] copyHostCerts
	I0917 00:50:13.467203  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:13.467237  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:13.467253  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:13.467326  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:13.467466  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:13.467488  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:13.467493  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:13.467517  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:13.467581  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:13.467598  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:13.467604  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:13.467624  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:13.467732  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:13.870974  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:13.871042  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:13.871085  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:13.889812  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:13.926496  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:13.926545  632515 retry.go:31] will retry after 267.505033ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:14.231498  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:14.231534  632515 retry.go:31] will retry after 522.902976ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:14.791171  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:14.791205  632515 retry.go:31] will retry after 739.615653ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:15.567533  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:15.567636  632515 retry.go:31] will retry after 232.900985ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:15.801150  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:15.819485  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:15.855915  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:15.855948  632515 retry.go:31] will retry after 279.418591ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:16.173138  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:16.173186  632515 retry.go:31] will retry after 265.737704ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:16.477676  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:16.477709  632515 retry.go:31] will retry after 702.578423ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:17.216952  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:17.217096  632515 provision.go:87] duration metric: took 3.769225472s to configureAuth
	W0917 00:50:17.217109  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:17.217124  632515 retry.go:31] will retry after 917.898µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:17.218282  632515 provision.go:84] configureAuth start
	I0917 00:50:17.218375  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:17.237626  632515 provision.go:143] copyHostCerts
	I0917 00:50:17.237669  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:17.237705  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:17.237716  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:17.237768  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:17.237859  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:17.237878  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:17.237882  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:17.237911  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:17.237968  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:17.237991  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:17.237996  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:17.238025  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:17.238106  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:17.295733  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:17.295811  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:17.295864  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:17.315495  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:17.351525  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:17.351562  632515 retry.go:31] will retry after 278.460935ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:17.666932  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:17.666969  632515 retry.go:31] will retry after 353.734866ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:18.057920  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:18.057958  632515 retry.go:31] will retry after 706.602278ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:18.802736  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:18.802814  632515 retry.go:31] will retry after 187.543888ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:18.991326  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:19.010215  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:19.046936  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:19.046968  632515 retry.go:31] will retry after 181.982762ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:19.265359  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:19.265415  632515 retry.go:31] will retry after 426.438339ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:19.728051  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:19.728089  632515 retry.go:31] will retry after 494.698101ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:20.260104  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:20.260143  632515 retry.go:31] will retry after 546.342664ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:20.843132  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:20.843234  632515 provision.go:87] duration metric: took 3.624926933s to configureAuth
	W0917 00:50:20.843248  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:20.843260  632515 retry.go:31] will retry after 614.342µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:20.844436  632515 provision.go:84] configureAuth start
	I0917 00:50:20.844517  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:20.863058  632515 provision.go:143] copyHostCerts
	I0917 00:50:20.863099  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:20.863129  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:20.863138  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:20.863192  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:20.863270  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:20.863287  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:20.863293  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:20.863326  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:20.863373  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:20.863408  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:20.863418  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:20.863443  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:20.863501  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:21.547579  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:21.547640  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:21.547689  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:21.567099  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:21.603139  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:21.603173  632515 retry.go:31] will retry after 354.905304ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:21.994839  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:21.994871  632515 retry.go:31] will retry after 230.336886ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:22.262896  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:22.262933  632515 retry.go:31] will retry after 470.238343ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:22.769438  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:22.769478  632515 retry.go:31] will retry after 775.977166ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:23.582257  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:23.582369  632515 provision.go:87] duration metric: took 2.737910901s to configureAuth
	W0917 00:50:23.582382  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:23.582428  632515 retry.go:31] will retry after 1.384293ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:23.584647  632515 provision.go:84] configureAuth start
	I0917 00:50:23.584721  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:23.604649  632515 provision.go:143] copyHostCerts
	I0917 00:50:23.604691  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:23.604726  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:23.604738  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:23.604803  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:23.604906  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:23.604928  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:23.604937  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:23.604972  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:23.605082  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:23.605108  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:23.605117  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:23.605186  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:23.605289  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:23.929770  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:23.929834  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:23.929882  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:23.950551  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:23.986773  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:23.986827  632515 retry.go:31] will retry after 191.045816ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:24.215077  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:24.215159  632515 retry.go:31] will retry after 367.654178ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:24.619976  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:24.620013  632515 retry.go:31] will retry after 667.754811ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:25.324805  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:25.324901  632515 retry.go:31] will retry after 226.841471ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:25.552443  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:25.572474  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:25.608798  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:25.608828  632515 retry.go:31] will retry after 261.920271ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:25.907792  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:25.907829  632515 retry.go:31] will retry after 224.736719ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:26.169079  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:26.169236  632515 retry.go:31] will retry after 469.609314ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:26.676774  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:26.676905  632515 provision.go:87] duration metric: took 3.092235264s to configureAuth
	W0917 00:50:26.676919  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:26.676935  632515 retry.go:31] will retry after 1.322684ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:26.679211  632515 provision.go:84] configureAuth start
	I0917 00:50:26.679326  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:26.699028  632515 provision.go:143] copyHostCerts
	I0917 00:50:26.699074  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:26.699113  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:26.699122  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:26.699179  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:26.699263  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:26.699281  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:26.699287  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:26.699322  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:26.699435  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:26.699458  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:26.699464  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:26.699486  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:26.699541  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:26.883507  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:26.883571  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:26.883610  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:26.901909  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:26.938113  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:26.938146  632515 retry.go:31] will retry after 134.491037ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:27.109871  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:27.109912  632515 retry.go:31] will retry after 526.197976ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:27.673521  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:27.673555  632515 retry.go:31] will retry after 585.726632ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:28.297059  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:28.297095  632515 retry.go:31] will retry after 528.356861ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:28.863599  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:28.863707  632515 provision.go:87] duration metric: took 2.184468569s to configureAuth
	W0917 00:50:28.863723  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:28.863738  632515 retry.go:31] will retry after 5.073321ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
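
Each failed configureAuth pass is surfaced as a "Temporary Error" and retried almost immediately: the outer waits grow roughly exponentially with jitter, from 82.092µs on the first cycle up to 5.073321ms here, which is why the cycles restart back-to-back. A minimal sketch of that retry cadence follows; retryExpBackoff and errHandshake are hypothetical names, not minikube's real retry implementation.

// Sketch of the outer retry cadence around configureAuth: each failure
// is retried after a short, roughly doubling delay, matching the
// back-to-back "configureAuth start" cycles in the log.
package main

import (
	"errors"
	"fmt"
	"time"
)

// Stand-in for the real handshake error seen above.
var errHandshake = errors.New("ssh: handshake failed: unable to authenticate")

// retryExpBackoff calls fn until it succeeds or the next delay would
// exceed maxDelay, starting near the ~82µs first wait seen above.
func retryExpBackoff(fn func() error, initial, maxDelay time.Duration) error {
	delay := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if delay > maxDelay {
			return fmt.Errorf("Temporary Error: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	err := retryExpBackoff(func() error { return errHandshake },
		82*time.Microsecond, 5*time.Millisecond)
	fmt.Println(err)
}
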
	I0917 00:50:28.868924  632515 provision.go:84] configureAuth start
	I0917 00:50:28.869023  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:28.887951  632515 provision.go:143] copyHostCerts
	I0917 00:50:28.887998  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:28.888029  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:28.888039  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:28.888105  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:28.888201  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:28.888223  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:28.888233  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:28.888267  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:28.888349  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:28.888374  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:28.888382  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:28.888425  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:28.888506  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:28.973999  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:28.974061  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:28.974105  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:28.993851  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:29.030823  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:29.030857  632515 retry.go:31] will retry after 289.215993ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:29.356949  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:29.356981  632515 retry.go:31] will retry after 495.318582ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:29.888829  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:29.888863  632515 retry.go:31] will retry after 628.473012ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:30.554178  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:30.554268  632515 retry.go:31] will retry after 195.67279ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:30.750597  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:30.768976  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:30.805780  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:30.805817  632515 retry.go:31] will retry after 162.662176ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:31.005739  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:31.005782  632515 retry.go:31] will retry after 501.550591ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:31.543556  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:31.543585  632515 retry.go:31] will retry after 654.512353ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:32.234876  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:32.234982  632515 provision.go:87] duration metric: took 3.366029278s to configureAuth
	W0917 00:50:32.234996  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:32.235011  632515 retry.go:31] will retry after 4.423458ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:32.240271  632515 provision.go:84] configureAuth start
	I0917 00:50:32.240382  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:32.260973  632515 provision.go:143] copyHostCerts
	I0917 00:50:32.261040  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:32.261072  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:32.261082  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:32.261135  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:32.261251  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:32.261275  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:32.261280  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:32.261305  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:32.261350  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:32.261373  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:32.261380  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:32.261427  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:32.261492  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:32.576811  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:32.576898  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:32.576946  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:32.594876  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:32.631272  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:32.631304  632515 retry.go:31] will retry after 159.534115ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:32.828830  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:32.828873  632515 retry.go:31] will retry after 525.910165ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:33.391768  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:33.391811  632515 retry.go:31] will retry after 487.290507ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:33.916025  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:33.916061  632515 retry.go:31] will retry after 426.666789ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:34.380994  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:34.381113  632515 provision.go:87] duration metric: took 2.140814482s to configureAuth
	W0917 00:50:34.381127  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:34.381151  632515 retry.go:31] will retry after 4.999439ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:34.386421  632515 provision.go:84] configureAuth start
	I0917 00:50:34.386521  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:34.405489  632515 provision.go:143] copyHostCerts
	I0917 00:50:34.405536  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:34.405566  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:34.405584  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:34.405640  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:34.405718  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:34.405736  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:34.405743  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:34.405762  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:34.405816  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:34.405834  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:34.405838  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:34.405858  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:34.405912  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:34.645184  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:34.645253  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:34.645292  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:34.664718  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:34.700962  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:34.701003  632515 retry.go:31] will retry after 219.116738ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:34.956072  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:34.956145  632515 retry.go:31] will retry after 526.047595ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:35.518345  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:35.518380  632515 retry.go:31] will retry after 696.668276ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:36.252208  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:36.252303  632515 retry.go:31] will retry after 330.708312ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:36.583965  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:36.602741  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:36.638646  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:36.638688  632515 retry.go:31] will retry after 278.757425ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:36.954355  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:36.954410  632515 retry.go:31] will retry after 226.711803ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:37.220262  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:37.220310  632515 retry.go:31] will retry after 749.165652ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:38.006557  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:38.006589  632515 retry.go:31] will retry after 482.349257ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:38.526080  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:38.526178  632515 provision.go:87] duration metric: took 4.139727646s to configureAuth
	W0917 00:50:38.526188  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:38.526212  632515 retry.go:31] will retry after 19.037245ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
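
[editor's note] Each cycle's "generating server cert ... san=[...]" step signs a fresh server certificate against the local CA, embedding both IP and DNS SANs. A minimal sketch of that step, assuming nothing beyond Go's crypto/x509 (the subject and SAN values are copied from the log; the in-memory CA here is hypothetical, whereas minikube loads its CA from .minikube/certs; error handling elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA key and self-signed CA certificate.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs seen in the log lines above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-671025-m04"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
            DNSNames:     []string{"ha-671025-m04", "localhost", "minikube"},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
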
	I0917 00:50:38.545416  632515 provision.go:84] configureAuth start
	I0917 00:50:38.545541  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:38.566128  632515 provision.go:143] copyHostCerts
	I0917 00:50:38.566171  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:38.566202  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:38.566208  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:38.566271  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:38.566349  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:38.566368  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:38.566372  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:38.566416  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:38.566482  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:38.566502  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:38.566507  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:38.566526  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:38.566593  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:38.991903  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:38.991971  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:38.992013  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:39.011347  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:39.050038  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:39.050073  632515 retry.go:31] will retry after 337.988535ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:39.425023  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:39.425081  632515 retry.go:31] will retry after 500.505537ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:39.962290  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:39.962331  632515 retry.go:31] will retry after 503.789672ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:40.503420  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:40.503518  632515 retry.go:31] will retry after 333.367854ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:40.837065  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:40.856774  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:40.894359  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:40.894416  632515 retry.go:31] will retry after 222.689334ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:41.154246  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:41.154287  632515 retry.go:31] will retry after 282.589186ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:41.474233  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:41.474271  632515 retry.go:31] will retry after 651.602213ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:42.162200  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:42.162235  632515 retry.go:31] will retry after 552.404672ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:42.752279  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:42.752412  632515 provision.go:87] duration metric: took 4.206938108s to configureAuth
	W0917 00:50:42.752426  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:42.752443  632515 retry.go:31] will retry after 18.126258ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:42.771710  632515 provision.go:84] configureAuth start
	I0917 00:50:42.771828  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:42.790293  632515 provision.go:143] copyHostCerts
	I0917 00:50:42.790346  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:42.790378  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:42.790398  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:42.790463  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:42.790563  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:42.790598  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:42.790608  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:42.790681  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:42.790749  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:42.790775  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:42.790787  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:42.790819  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:42.791233  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:42.868607  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:42.868675  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:42.868711  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:42.888168  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:42.925190  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:42.925226  632515 retry.go:31] will retry after 290.318239ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:43.251563  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:43.251597  632515 retry.go:31] will retry after 468.433406ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:43.756730  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:43.756769  632515 retry.go:31] will retry after 614.415077ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:44.408758  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:44.408845  632515 retry.go:31] will retry after 201.201149ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:44.610310  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:44.629682  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:44.666478  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:44.666513  632515 retry.go:31] will retry after 335.575333ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:45.039687  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:45.039722  632515 retry.go:31] will retry after 325.495793ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:45.402130  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:45.402167  632515 retry.go:31] will retry after 665.343507ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:46.105384  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:46.105501  632515 provision.go:87] duration metric: took 3.333748619s to configureAuth
	W0917 00:50:46.105514  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:46.105530  632515 retry.go:31] will retry after 26.362188ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:46.132797  632515 provision.go:84] configureAuth start
	I0917 00:50:46.132913  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:46.151606  632515 provision.go:143] copyHostCerts
	I0917 00:50:46.151650  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:46.151683  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:46.151693  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:46.151749  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:46.151834  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:46.151854  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:46.151859  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:46.151879  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:46.151925  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:46.151941  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:46.151947  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:46.151965  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:46.152015  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:46.678008  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:46.678077  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:46.678115  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:46.697254  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:46.733438  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:46.733466  632515 retry.go:31] will retry after 278.597162ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:47.050972  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:47.051022  632515 retry.go:31] will retry after 188.61489ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:47.276353  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:47.276422  632515 retry.go:31] will retry after 668.98273ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:47.984108  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:47.984145  632515 retry.go:31] will retry after 606.369731ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:48.628443  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:48.628556  632515 provision.go:87] duration metric: took 2.495723391s to configureAuth
	W0917 00:50:48.628570  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:48.628587  632515 retry.go:31] will retry after 64.390783ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:48.693858  632515 provision.go:84] configureAuth start
	I0917 00:50:48.693987  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:48.713843  632515 provision.go:143] copyHostCerts
	I0917 00:50:48.713892  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:48.713929  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:48.713945  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:48.714004  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:48.714086  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:48.714107  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:48.714114  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:48.714135  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:48.714184  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:48.714201  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:48.714204  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:48.714222  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:48.714276  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:48.895697  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:48.895760  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:48.895811  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:48.914428  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:48.950712  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:48.950744  632515 retry.go:31] will retry after 178.741801ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:49.166254  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:49.166296  632515 retry.go:31] will retry after 501.407422ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:49.703996  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:49.704033  632515 retry.go:31] will retry after 817.867259ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:50.560617  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:50.560706  632515 retry.go:31] will retry after 312.243953ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:50.873217  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:50.891443  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:50.926995  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:50.927027  632515 retry.go:31] will retry after 156.916989ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:51.120257  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:51.120290  632515 retry.go:31] will retry after 438.534255ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:51.596576  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:51.596617  632515 retry.go:31] will retry after 414.358837ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:52.048272  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:52.048406  632515 provision.go:87] duration metric: took 3.354481141s to configureAuth
	W0917 00:50:52.048419  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:52.048435  632515 retry.go:31] will retry after 61.191343ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:52.110719  632515 provision.go:84] configureAuth start
	I0917 00:50:52.110826  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:52.128699  632515 provision.go:143] copyHostCerts
	I0917 00:50:52.128752  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:52.128784  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:52.128796  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:52.128877  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:52.128987  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:52.129058  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:52.129066  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:52.129093  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:52.129152  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:52.129170  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:52.129177  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:52.129196  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:52.129259  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:52.433622  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:52.433690  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:52.433739  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:52.453878  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:52.490084  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:52.490116  632515 retry.go:31] will retry after 172.629388ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:52.700293  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:52.700336  632515 retry.go:31] will retry after 263.193431ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:53.001711  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:53.001752  632515 retry.go:31] will retry after 292.388705ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:53.330899  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:53.330983  632515 retry.go:31] will retry after 150.876202ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:53.482528  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:53.503352  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:53.539271  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:53.539312  632515 retry.go:31] will retry after 204.255046ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:53.780000  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:53.780033  632515 retry.go:31] will retry after 286.53771ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:54.104096  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:54.104136  632515 retry.go:31] will retry after 342.853351ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:54.484140  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:54.484183  632515 retry.go:31] will retry after 538.071273ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:55.059995  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:55.060097  632515 provision.go:87] duration metric: took 2.949335089s to configureAuth
	W0917 00:50:55.060112  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:55.060141  632515 retry.go:31] will retry after 111.583987ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:55.172469  632515 provision.go:84] configureAuth start
	I0917 00:50:55.172579  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:55.192741  632515 provision.go:143] copyHostCerts
	I0917 00:50:55.192784  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:55.192813  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:55.192819  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:55.192888  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:55.192967  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:55.192985  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:55.192991  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:55.193019  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:55.193065  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:55.193081  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:55.193087  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:55.193108  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:55.193172  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:55.387230  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:55.387305  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:55.387354  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:55.406011  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:55.442542  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:55.442581  632515 retry.go:31] will retry after 197.893115ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:55.677233  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:55.677268  632515 retry.go:31] will retry after 361.184837ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:56.075532  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:56.075571  632515 retry.go:31] will retry after 820.045156ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:56.932557  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:56.932659  632515 retry.go:31] will retry after 314.2147ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:57.247168  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:57.265865  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:57.302600  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:57.302632  632515 retry.go:31] will retry after 269.882328ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:57.608658  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:57.608688  632515 retry.go:31] will retry after 352.472758ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:57.997996  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:57.998036  632515 retry.go:31] will retry after 611.661766ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:58.646119  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:58.646221  632515 provision.go:87] duration metric: took 3.473704273s to configureAuth
	W0917 00:50:58.646232  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:58.646247  632515 retry.go:31] will retry after 196.207718ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:58.842597  632515 provision.go:84] configureAuth start
	I0917 00:50:58.842696  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:58.861846  632515 provision.go:143] copyHostCerts
	I0917 00:50:58.861891  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:58.861926  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:58.861937  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:58.861993  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:58.862077  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:58.862105  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:58.862112  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:58.862133  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:58.862178  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:58.862195  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:58.862201  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:58.862222  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:58.862306  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:58.925355  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:58.925427  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:58.925471  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:58.944441  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:58.981661  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:58.981696  632515 retry.go:31] will retry after 357.688867ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:59.376956  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:59.377010  632515 retry.go:31] will retry after 324.136592ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:59.737581  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:59.737618  632515 retry.go:31] will retry after 792.456915ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:00.568086  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:00.568182  632515 retry.go:31] will retry after 279.693773ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:00.848647  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:00.868780  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:00.904736  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:00.904769  632515 retry.go:31] will retry after 139.880253ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:01.081107  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:01.081149  632515 retry.go:31] will retry after 255.7145ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:01.374157  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:01.374191  632515 retry.go:31] will retry after 398.296513ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:01.808876  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:01.808911  632515 retry.go:31] will retry after 429.478006ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:02.276059  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:02.276173  632515 provision.go:87] duration metric: took 3.433544523s to configureAuth
	W0917 00:51:02.276185  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:02.276200  632515 retry.go:31] will retry after 269.773489ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
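The "will retry after …ms" lines come from a generic retry helper (retry.go:31) that sleeps a randomized, growing interval between attempts, which is why the delays above drift upward without following an exact schedule. A minimal sketch of that pattern, with illustrative attempt counts and base delay rather than minikube's actual tuning:

// Sketch only: retry fn with jittered, roughly exponential backoff,
// mirroring the intervals logged by retry.go above.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the delay each round and add random jitter so that
		// concurrent retriers do not hammer the server in lockstep.
		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	_ = retry(4, 150*time.Millisecond, func() error {
		return fmt.Errorf("ssh: handshake failed") // stands in for the dial above
	})
}

Because the underlying failure here is deterministic (a rejected key), the backoff only spaces out identical errors; no amount of retrying can succeed until the key mismatch is repaired.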
	I0917 00:51:02.546669  632515 provision.go:84] configureAuth start
	I0917 00:51:02.546785  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:51:02.565819  632515 provision.go:143] copyHostCerts
	I0917 00:51:02.565857  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:02.565886  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:51:02.565895  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:02.565955  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:51:02.566034  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:02.566052  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:51:02.566059  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:02.566080  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:51:02.566147  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:02.566169  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:51:02.566176  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:02.566197  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:51:02.566287  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:51:02.707021  632515 provision.go:177] copyRemoteCerts
	I0917 00:51:02.707082  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:51:02.707122  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:02.725172  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:02.761827  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:02.761863  632515 retry.go:31] will retry after 155.983276ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:02.954178  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:02.954219  632515 retry.go:31] will retry after 308.036085ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:03.299259  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:03.299304  632515 retry.go:31] will retry after 573.078445ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:03.908424  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:03.908514  632515 retry.go:31] will retry after 231.719058ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:04.141101  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:04.159661  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:04.196173  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:04.196204  632515 retry.go:31] will retry after 265.004107ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:04.497255  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:04.497301  632515 retry.go:31] will retry after 207.19744ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:04.740144  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:04.740176  632515 retry.go:31] will retry after 616.853014ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:05.394683  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:05.394781  632515 provision.go:87] duration metric: took 2.848059764s to configureAuth
	W0917 00:51:05.394794  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:05.394809  632515 retry.go:31] will retry after 403.451834ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:05.798332  632515 provision.go:84] configureAuth start
	I0917 00:51:05.798469  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:51:05.816560  632515 provision.go:143] copyHostCerts
	I0917 00:51:05.816600  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:05.816629  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:51:05.816638  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:05.816690  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:51:05.816763  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:05.816781  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:51:05.816785  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:05.816805  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:51:05.816850  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:05.816869  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:51:05.816874  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:05.816893  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:51:05.816942  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:51:06.333877  632515 provision.go:177] copyRemoteCerts
	I0917 00:51:06.333939  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:51:06.333978  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:06.355479  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:06.392600  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:06.392641  632515 retry.go:31] will retry after 191.063243ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:06.620279  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:06.620312  632515 retry.go:31] will retry after 258.674944ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:06.916019  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:06.916052  632515 retry.go:31] will retry after 539.137674ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:07.490972  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:07.491012  632515 retry.go:31] will retry after 844.547743ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:08.372738  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:08.372835  632515 provision.go:87] duration metric: took 2.574473013s to configureAuth
	W0917 00:51:08.372848  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:08.372865  632515 retry.go:31] will retry after 260.808873ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:08.634342  632515 provision.go:84] configureAuth start
	I0917 00:51:08.634493  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:51:08.653239  632515 provision.go:143] copyHostCerts
	I0917 00:51:08.653276  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:08.653309  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:51:08.653322  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:08.653384  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:51:08.653565  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:08.653596  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:51:08.653606  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:08.653648  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:51:08.653717  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:08.653743  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:51:08.653752  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:08.653784  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:51:08.653857  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:51:08.730992  632515 provision.go:177] copyRemoteCerts
	I0917 00:51:08.731055  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:51:08.731111  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:08.749527  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:08.785121  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:08.785151  632515 retry.go:31] will retry after 364.542091ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:09.186219  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:09.186257  632515 retry.go:31] will retry after 547.354514ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:09.771218  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:09.771251  632515 retry.go:31] will retry after 393.114843ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:10.200019  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:10.200113  632515 retry.go:31] will retry after 322.022298ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:10.522644  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:10.542542  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:10.578305  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:10.578341  632515 retry.go:31] will retry after 156.765545ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:10.772114  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:10.772150  632515 retry.go:31] will retry after 440.395985ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:11.249690  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:11.249723  632515 retry.go:31] will retry after 316.056253ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:11.602837  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:11.602867  632515 retry.go:31] will retry after 793.877155ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:12.433964  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:12.434089  632515 provision.go:87] duration metric: took 3.799715145s to configureAuth
	W0917 00:51:12.434107  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:12.434128  632515 retry.go:31] will retry after 818.896799ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:13.253087  632515 provision.go:84] configureAuth start
	I0917 00:51:13.253220  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:51:13.271499  632515 provision.go:143] copyHostCerts
	I0917 00:51:13.271537  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:13.271572  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:51:13.271584  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:13.271654  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:51:13.271753  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:13.271781  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:51:13.271791  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:13.271825  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:51:13.271890  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:13.271917  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:51:13.271926  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:13.271954  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:51:13.272026  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:51:13.421488  632515 provision.go:177] copyRemoteCerts
	I0917 00:51:13.421560  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:51:13.421600  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:13.441833  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:13.479866  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:13.479906  632515 retry.go:31] will retry after 241.369213ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:13.758753  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:13.758780  632515 retry.go:31] will retry after 421.966909ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:14.217788  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:14.217822  632515 retry.go:31] will retry after 379.069996ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:14.635244  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:14.635284  632515 retry.go:31] will retry after 661.142982ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:15.332869  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:15.332968  632515 provision.go:87] duration metric: took 2.079842358s to configureAuth
	W0917 00:51:15.332981  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:15.332999  632515 retry.go:31] will retry after 1.513437961s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
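Each cycle also regenerates the server certificate (provision.go:117) with the node's addresses and names as SANs, san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]. A minimal standard-library sketch of what that amounts to, assuming the CA certificate and key are already loaded (this is not the provision.go implementation):

// Sketch only: issue a server cert signed by a local CA, with the node's
// IPs and hostnames from the log line above as Subject Alternative Names.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func makeServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-671025-m04"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as logged: san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
		DNSNames:    []string{"ha-671025-m04", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	// Loading the CA pair is elided; the copyHostCerts steps above show where
	// minikube keeps ca.pem and ca-key.pem on the host.
}

The generation itself succeeds every time in the log; it is the subsequent copyRemoteCerts push over SSH that fails, so the freshly minted certificate never reaches the node.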
	I0917 00:51:16.846776  632515 provision.go:84] configureAuth start
	I0917 00:51:16.846873  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:51:16.865947  632515 provision.go:143] copyHostCerts
	I0917 00:51:16.865995  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:16.866029  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:51:16.866045  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:16.866110  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:51:16.866205  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:16.866230  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:51:16.866239  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:16.866274  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:51:16.866342  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:16.866366  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:51:16.866374  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:16.866417  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:51:16.866504  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:51:17.191667  632515 provision.go:177] copyRemoteCerts
	I0917 00:51:17.191732  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:51:17.191770  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:17.210373  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:17.246196  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:17.246231  632515 retry.go:31] will retry after 207.815954ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:17.490362  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:17.490422  632515 retry.go:31] will retry after 477.191676ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:18.004186  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:18.004226  632515 retry.go:31] will retry after 832.321168ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:18.874131  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:18.874224  632515 retry.go:31] will retry after 300.222685ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:19.174745  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:19.194057  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:19.230707  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:19.230745  632515 retry.go:31] will retry after 305.320497ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:19.572710  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:19.572746  632515 retry.go:31] will retry after 473.718736ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:20.084847  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:20.084885  632515 retry.go:31] will retry after 358.504495ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:20.481307  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:20.481448  632515 provision.go:87] duration metric: took 3.634641386s to configureAuth
	W0917 00:51:20.481467  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:20.481484  632515 retry.go:31] will retry after 1.55705326s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:22.038866  632515 provision.go:84] configureAuth start
	I0917 00:51:22.038992  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:51:22.057689  632515 provision.go:143] copyHostCerts
	I0917 00:51:22.057748  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:22.057786  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:51:22.057795  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:22.057874  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:51:22.057985  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:22.058015  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:51:22.058021  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:22.058061  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:51:22.058129  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:22.058155  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:51:22.058165  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:22.058194  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:51:22.058268  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:51:22.240974  632515 provision.go:177] copyRemoteCerts
	I0917 00:51:22.241048  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:51:22.241090  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:22.259723  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:22.295718  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:22.295755  632515 retry.go:31] will retry after 368.694319ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:22.701351  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:22.701413  632515 retry.go:31] will retry after 234.819858ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:22.973378  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:22.973421  632515 retry.go:31] will retry after 445.662455ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:23.457456  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:23.457559  632515 retry.go:31] will retry after 361.547297ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:23.820268  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:23.839565  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:23.877012  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:23.877051  632515 retry.go:31] will retry after 332.495425ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:24.247791  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:24.247832  632515 retry.go:31] will retry after 480.58286ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:24.766290  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:24.766325  632515 retry.go:31] will retry after 810.307801ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:25.613420  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:25.613526  632515 provision.go:87] duration metric: took 3.574631165s to configureAuth
	W0917 00:51:25.613536  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:25.613552  632515 retry.go:31] will retry after 3.493466893s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:29.108460  632515 provision.go:84] configureAuth start
	I0917 00:51:29.108592  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:51:29.127839  632515 provision.go:143] copyHostCerts
	I0917 00:51:29.127891  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:29.127920  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:51:29.127929  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:29.127982  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:51:29.128065  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:29.128084  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:51:29.128088  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:29.128123  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:51:29.128172  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:29.128189  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:51:29.128195  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:29.128216  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:51:29.128268  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:51:29.375095  632515 provision.go:177] copyRemoteCerts
	I0917 00:51:29.375157  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:51:29.375198  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:29.394447  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:29.430648  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:29.430684  632515 retry.go:31] will retry after 150.757141ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:29.619124  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:29.619165  632515 retry.go:31] will retry after 238.164326ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:29.895281  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:29.895319  632515 retry.go:31] will retry after 311.5784ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:30.243059  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:30.243097  632515 retry.go:31] will retry after 958.202731ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:31.238646  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:31.238758  632515 provision.go:87] duration metric: took 2.130250058s to configureAuth
	W0917 00:51:31.238771  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:31.238786  632515 retry.go:31] will retry after 2.209510519s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:33.449718  632515 provision.go:84] configureAuth start
	I0917 00:51:33.449826  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:51:33.468749  632515 provision.go:143] copyHostCerts
	I0917 00:51:33.468799  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:33.468836  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:51:33.468846  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:33.468918  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:51:33.469024  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:33.469052  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:51:33.469062  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:33.469096  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:51:33.469165  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:33.469190  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:51:33.469199  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:33.469229  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:51:33.469357  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:51:33.985472  632515 provision.go:177] copyRemoteCerts
	I0917 00:51:33.985536  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:51:33.985573  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:34.004712  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:34.041636  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:34.041667  632515 retry.go:31] will retry after 363.611811ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:34.443484  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:34.443524  632515 retry.go:31] will retry after 483.561818ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:34.962924  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:34.962962  632515 retry.go:31] will retry after 639.921331ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:35.642266  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:35.642363  632515 retry.go:31] will retry after 341.867901ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:35.985141  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:36.005149  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:36.042989  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:36.043054  632515 retry.go:31] will retry after 226.013631ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:36.306592  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:36.306631  632515 retry.go:31] will retry after 437.098541ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:36.780356  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:36.780417  632515 retry.go:31] will retry after 807.742041ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:37.625924  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:37.626016  632515 provision.go:87] duration metric: took 4.176272444s to configureAuth
	W0917 00:51:37.626032  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:37.626046  632515 retry.go:31] will retry after 5.783821425s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:43.410502  632515 provision.go:84] configureAuth start
	I0917 00:51:43.410627  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:51:43.429575  632515 provision.go:143] copyHostCerts
	I0917 00:51:43.429625  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:43.429656  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:51:43.429668  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:43.429730  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:51:43.429808  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:43.429829  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:51:43.429836  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:43.429856  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:51:43.429899  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:43.429915  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:51:43.429921  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:43.429938  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:51:43.429988  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:51:43.676937  632515 provision.go:177] copyRemoteCerts
	I0917 00:51:43.677016  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:51:43.677067  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:43.695948  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:43.731552  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:43.731597  632515 retry.go:31] will retry after 371.063976ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:44.139453  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:44.139502  632515 retry.go:31] will retry after 537.52019ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:44.712824  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:44.712860  632515 retry.go:31] will retry after 641.219509ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:45.391773  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:45.391868  632515 provision.go:87] duration metric: took 1.981318846s to configureAuth
	W0917 00:51:45.391880  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:45.391895  632515 ubuntu.go:202] Error configuring auth during provisioning Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:45.391904  632515 machine.go:96] duration metric: took 10m57.675312059s to provisionDockerMachine
	I0917 00:51:45.391996  632515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:51:45.392045  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:45.410677  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:45.447453  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:45.447492  632515 retry.go:31] will retry after 219.806567ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:45.704966  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:45.704997  632515 retry.go:31] will retry after 253.108883ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:45.994383  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:45.994455  632515 retry.go:31] will retry after 303.312227ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:46.334082  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:46.334176  632515 retry.go:31] will retry after 198.442889ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:46.533637  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:46.552382  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:46.588617  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:46.588648  632515 retry.go:31] will retry after 246.644284ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:46.871879  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:46.871908  632515 retry.go:31] will retry after 253.158895ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:47.160355  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:47.160421  632515 retry.go:31] will retry after 673.328529ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:47.870783  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:47.870870  632515 start.go:268] error running df -h /var: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:47.870881  632515 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:47.870941  632515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:51:47.870985  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:47.890542  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:47.926837  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:47.926888  632515 retry.go:31] will retry after 191.979643ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:48.155789  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:48.155822  632515 retry.go:31] will retry after 496.333376ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:48.688512  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:48.688545  632515 retry.go:31] will retry after 707.042596ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:49.431589  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:49.431677  632515 retry.go:31] will retry after 160.419001ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:49.592966  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:49.613595  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:49.649915  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:49.649955  632515 retry.go:31] will retry after 205.246327ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:49.891651  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:49.891686  632515 retry.go:31] will retry after 286.771592ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:50.215702  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:50.215742  632515 retry.go:31] will retry after 813.162049ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:51.065001  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:51.065091  632515 start.go:283] error running df -BG /var: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:51.065109  632515 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:51.065120  632515 fix.go:56] duration metric: took 11m3.67745899s for fixHost
	I0917 00:51:51.065132  632515 start.go:83] releasing machines lock for "ha-671025-m04", held for 11m3.677487819s
	W0917 00:51:51.065151  632515 start.go:714] error starting host: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:51.065294  632515 out.go:285] ! StartHost failed, but will try again: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	! StartHost failed, but will try again: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:51.065310  632515 start.go:729] Will try again in 5 seconds ...
	I0917 00:51:56.068712  632515 start.go:360] acquireMachinesLock for ha-671025-m04: {Name:mka8d143727db583191b041d9fdffdc34290d3fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:51:56.068825  632515 start.go:364] duration metric: took 72.54µs to acquireMachinesLock for "ha-671025-m04"
	I0917 00:51:56.068857  632515 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:51:56.068866  632515 fix.go:54] fixHost starting: m04
	I0917 00:51:56.069146  632515 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:51:56.089434  632515 fix.go:112] recreateIfNeeded on ha-671025-m04: state=Running err=<nil>
	W0917 00:51:56.089467  632515 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:51:56.091315  632515 out.go:252] * Updating the running docker "ha-671025-m04" container ...
	I0917 00:51:56.091363  632515 machine.go:93] provisionDockerMachine start ...
	I0917 00:51:56.091481  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:56.111050  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:51:56.111338  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I0917 00:51:56.111353  632515 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:51:56.147286  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:59.186003  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:02.224065  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:05.261128  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:08.298507  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:11.336655  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:14.374172  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:17.411005  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:20.448133  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:23.484595  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:26.522064  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:29.561855  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:32.599017  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:35.637968  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:38.676013  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:41.715044  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:44.753147  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:47.789890  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:50.827732  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:53.865517  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:56.901256  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:59.937736  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:02.975072  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:06.012018  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:09.050985  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:12.087769  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:15.125608  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:18.163655  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:21.202155  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:24.242132  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:27.279947  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:30.316610  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:33.353948  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:36.392886  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:39.431538  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:42.470338  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:45.508895  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:48.546547  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:51.584487  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:54.622720  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:57.659585  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:00.696914  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:03.734601  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:06.771719  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:09.808339  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:12.845310  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:15.883169  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:18.921190  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:21.957649  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:24.995930  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:28.032738  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:31.069581  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:34.108291  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:37.146962  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:40.184957  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:43.225066  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:46.263427  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:49.299798  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:52.337483  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:55.373484  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:58.375202  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:54:58.375243  632515 ubuntu.go:182] provisioning hostname "ha-671025-m04"
	I0917 00:54:58.375323  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:54:58.394506  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:54:58.394819  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I0917 00:54:58.394837  632515 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m04 && echo "ha-671025-m04" | sudo tee /etc/hostname
	I0917 00:54:58.431690  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:01.471166  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:04.510103  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:07.546274  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:10.582544  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:13.619501  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:16.657477  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:19.695282  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:22.731579  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:25.768876  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:28.806301  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:31.842634  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:34.880236  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:37.918250  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:40.956882  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:43.993751  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:47.031600  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:50.069536  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:53.108071  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:56.146453  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:59.184185  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:02.221185  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:05.258874  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:08.296468  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:11.334381  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:14.373700  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:17.410753  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:20.448244  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:23.487061  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:26.525922  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:29.564962  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:32.601712  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:35.638347  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:38.677091  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:41.715243  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:44.753492  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:47.790755  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:50.827016  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:53.864846  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:56.901158  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:59.937763  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:02.975137  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:06.013236  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:09.050745  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:12.087672  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:15.126672  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:18.162247  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:21.199672  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:24.236364  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:27.272510  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:30.308139  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:33.345903  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:36.384679  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:39.422001  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:42.457940  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:45.493949  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:48.530953  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:51.568902  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:54.606598  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:57.643384  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:58:00.644556  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:58:00.644656  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:58:00.664645  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:58:00.664896  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I0917 00:58:00.664913  632515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:58:00.701043  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-amd64 -p ha-671025 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : signal: killed
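The stderr log above records minikube's retry helper backing off after every failed SSH handshake ("will retry after 150.757141ms", "will retry after 238.164326ms", and so on, from retry.go). A minimal, hypothetical Go sketch of that retry-with-growing-delay pattern follows; the names are illustrative stand-ins, not minikube's actual retry package API.

	// Hypothetical sketch of the retry-with-backoff pattern visible in the
	// retry.go log lines above; names are illustrative, not minikube's API.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs op until it succeeds or maxTries attempts are
	// exhausted, sleeping a jittered, growing delay between attempts, much
	// like the "will retry after ..." messages in the log.
	func retryWithBackoff(op func() error, maxTries int, base time.Duration) error {
		var err error
		delay := base
		for i := 0; i < maxTries; i++ {
			if err = op(); err == nil {
				return nil
			}
			// Add up to 50% jitter so repeated failures do not retry in lockstep.
			jitter := time.Duration(rand.Int63n(int64(delay/2) + 1))
			fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return fmt.Errorf("all %d attempts failed: %w", maxTries, err)
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(func() error {
			attempts++
			if attempts < 3 {
				return errors.New("ssh: handshake failed")
			}
			return nil
		}, 5, 200*time.Millisecond)
		fmt.Println("result:", err)
	}

The doubling-plus-jitter schedule is consistent with the irregular, lengthening intervals in the log, but is only an assumption about the real implementation; what the log makes unambiguous is that every attempt failed the same way (publickey authentication rejected), so no amount of backoff could succeed.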
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-671025
helpers_test.go:243: (dbg) docker inspect ha-671025:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea",
	        "Created": "2025-09-17T00:28:07.60079298Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 632706,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:40:32.010222289Z",
	            "FinishedAt": "2025-09-17T00:40:31.224824882Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/hostname",
	        "HostsPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/hosts",
	        "LogPath": "/var/lib/docker/containers/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea/843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea-json.log",
	        "Name": "/ha-671025",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-671025:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-671025",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "843490787febe92c83d546354b0d85a28fd552b8902394552899c94c1c1eb9ea",
	                "LowerDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e05e10e8971e45ab45a3e88ba8ac32ba623e97d4b27aca2b35d9f2dca223b0e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-671025",
	                "Source": "/var/lib/docker/volumes/ha-671025/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-671025",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-671025",
	                "name.minikube.sigs.k8s.io": "ha-671025",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5443ec6ca04255985d3217d71e2090ed51f83933b7c3d0593f530cea354e5b71",
	            "SandboxKey": "/var/run/docker/netns/5443ec6ca042",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33203"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33204"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33207"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33205"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33206"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-671025": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:25:57:b7:7d:5b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c0c35d0ccc41812bde7181e33c481a92e6c52d2d90efef6c84bca54a78763ef8",
	                    "EndpointID": "e56bcd480463914c5b93d2ef38aa63e424acdec6d6101792b9ddcab77ca405c0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-671025",
	                        "843490787feb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
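The inspect dump above is what the port lookups later in this log resolve against: the container publishes 22/tcp on 127.0.0.1:33203 and 8443/tcp on 127.0.0.1:33206. As a minimal sketch, the same Go template the provisioner runs below recovers a mapped port directly (profile name ha-671025 taken from this run):

    # Print the host port bound to the container's SSH port (22/tcp).
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-671025
    # expected for this run: 33203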
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-671025 -n ha-671025
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-671025 logs -n 25: (1.278116186s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-671025 cp ha-671025-m03:/home/docker/cp-test.txt ha-671025-m04:/home/docker/cp-test_ha-671025-m03_ha-671025-m04.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test_ha-671025-m03_ha-671025-m04.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp testdata/cp-test.txt ha-671025-m04:/home/docker/cp-test.txt                                                            │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile688907033/001/cp-test_ha-671025-m04.txt │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025:/home/docker/cp-test_ha-671025-m04_ha-671025.txt                      │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025.txt                                                │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025-m02:/home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m02 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025-m02.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ cp      │ ha-671025 cp ha-671025-m04:/home/docker/cp-test.txt ha-671025-m03:/home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt              │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ ssh     │ ha-671025 ssh -n ha-671025-m03 sudo cat /home/docker/cp-test_ha-671025-m04_ha-671025-m03.txt                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ node    │ ha-671025 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:31 UTC │
	│ node    │ ha-671025 node start m02 --alsologtostderr -v 5                                                                                     │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:31 UTC │ 17 Sep 25 00:31 UTC │
	│ node    │ ha-671025 node list --alsologtostderr -v 5                                                                                          │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:32 UTC │                     │
	│ stop    │ ha-671025 stop --alsologtostderr -v 5                                                                                               │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:32 UTC │ 17 Sep 25 00:32 UTC │
	│ start   │ ha-671025 start --wait true --alsologtostderr -v 5                                                                                  │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:32 UTC │                     │
	│ node    │ ha-671025 node list --alsologtostderr -v 5                                                                                          │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:39 UTC │                     │
	│ node    │ ha-671025 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:39 UTC │ 17 Sep 25 00:39 UTC │
	│ stop    │ ha-671025 stop --alsologtostderr -v 5                                                                                               │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:40 UTC │ 17 Sep 25 00:40 UTC │
	│ start   │ ha-671025 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                        │ ha-671025 │ jenkins │ v1.37.0 │ 17 Sep 25 00:40 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:40:31
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:40:31.754550  632515 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:40:31.754860  632515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:40:31.754871  632515 out.go:374] Setting ErrFile to fd 2...
	I0917 00:40:31.754878  632515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:40:31.755104  632515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:40:31.755658  632515 out.go:368] Setting JSON to false
	I0917 00:40:31.756720  632515 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12175,"bootTime":1758057457,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:40:31.756830  632515 start.go:140] virtualization: kvm guest
	I0917 00:40:31.759551  632515 out.go:179] * [ha-671025] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:40:31.761385  632515 notify.go:220] Checking for updates...
	I0917 00:40:31.761413  632515 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:40:31.763139  632515 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:40:31.765601  632515 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:40:31.767780  632515 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 00:40:31.769640  632515 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:40:31.771454  632515 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:40:31.774248  632515 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:40:31.775213  632515 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:40:31.802517  632515 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:40:31.802672  632515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:40:31.861960  632515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:40:31.851812235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:40:31.862083  632515 docker.go:318] overlay module found
	I0917 00:40:31.864164  632515 out.go:179] * Using the docker driver based on existing profile
	I0917 00:40:31.865836  632515 start.go:304] selected driver: docker
	I0917 00:40:31.865858  632515 start.go:918] validating driver "docker" against &{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false ku
bevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:40:31.866047  632515 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:40:31.866178  632515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:40:31.926530  632515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-17 00:40:31.916687214 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:40:31.927170  632515 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:40:31.927200  632515 cni.go:84] Creating CNI manager for ""
	I0917 00:40:31.927261  632515 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 00:40:31.927310  632515 start.go:348] cluster config:
	{Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: N
etworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-devic
e-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:40:31.929574  632515 out.go:179] * Starting "ha-671025" primary control-plane node in "ha-671025" cluster
	I0917 00:40:31.931055  632515 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:40:31.932656  632515 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:40:31.933886  632515 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:40:31.933961  632515 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 00:40:31.933976  632515 cache.go:58] Caching tarball of preloaded images
	I0917 00:40:31.934005  632515 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:40:31.934112  632515 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:40:31.934126  632515 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:40:31.934274  632515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:40:31.956303  632515 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:40:31.956326  632515 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:40:31.956371  632515 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:40:31.956431  632515 start.go:360] acquireMachinesLock for ha-671025: {Name:mk59b9e849284ed1f29625993b42430f4f0355ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:40:31.956502  632515 start.go:364] duration metric: took 47.858µs to acquireMachinesLock for "ha-671025"
	I0917 00:40:31.956526  632515 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:40:31.956534  632515 fix.go:54] fixHost starting: 
	I0917 00:40:31.956740  632515 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:40:31.977595  632515 fix.go:112] recreateIfNeeded on ha-671025: state=Stopped err=<nil>
	W0917 00:40:31.977630  632515 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:40:31.980559  632515 out.go:252] * Restarting existing docker container for "ha-671025" ...
	I0917 00:40:31.980667  632515 cli_runner.go:164] Run: docker start ha-671025
	I0917 00:40:32.235166  632515 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:40:32.255380  632515 kic.go:430] container "ha-671025" state is running.
	I0917 00:40:32.255799  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:40:32.277450  632515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:40:32.277765  632515 machine.go:93] provisionDockerMachine start ...
	I0917 00:40:32.277858  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:40:32.298083  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:40:32.298439  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I0917 00:40:32.298458  632515 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:40:32.299071  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53442->127.0.0.1:33203: read: connection reset by peer
	I0917 00:40:35.438793  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025
	
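The handshake failure at 00:40:32 is expected rather than a fault: docker start returns before sshd inside the container is listening, so the first dial hits a connection reset and libmachine retries until the hostname command at 00:40:35 succeeds. A rough shell equivalent of that wait loop, assuming nc (netcat) is available on the host:

    # Poll the mapped SSH port (33203 in this run) until sshd accepts connections.
    for i in $(seq 1 60); do
      nc -z 127.0.0.1 33203 && break
      sleep 1
    done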
	I0917 00:40:35.438835  632515 ubuntu.go:182] provisioning hostname "ha-671025"
	I0917 00:40:35.438907  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:40:35.458591  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:40:35.458843  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I0917 00:40:35.458861  632515 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025 && echo "ha-671025" | sudo tee /etc/hostname
	I0917 00:40:35.613012  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025
	
	I0917 00:40:35.613101  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:40:35.638093  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:40:35.638319  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I0917 00:40:35.638336  632515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:40:35.778694  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:40:35.778724  632515 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:40:35.778759  632515 ubuntu.go:190] setting up certificates
	I0917 00:40:35.778776  632515 provision.go:84] configureAuth start
	I0917 00:40:35.778841  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:40:35.797658  632515 provision.go:143] copyHostCerts
	I0917 00:40:35.797701  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:40:35.797747  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:40:35.797756  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:40:35.797821  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:40:35.797913  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:40:35.797931  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:40:35.797937  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:40:35.797963  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:40:35.798027  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:40:35.798099  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:40:35.798109  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:40:35.798135  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:40:35.798202  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025 san=[127.0.0.1 192.168.49.2 ha-671025 localhost minikube]
	I0917 00:40:35.941958  632515 provision.go:177] copyRemoteCerts
	I0917 00:40:35.942023  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:40:35.942062  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:40:35.960903  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:40:36.059750  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:40:36.059811  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0917 00:40:36.087354  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:40:36.087444  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:40:36.114513  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:40:36.114622  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:40:36.143137  632515 provision.go:87] duration metric: took 364.346394ms to configureAuth
	I0917 00:40:36.143166  632515 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:40:36.143370  632515 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:40:36.143497  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:40:36.162826  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:40:36.163056  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I0917 00:40:36.163075  632515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:40:36.461551  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:40:36.461583  632515 machine.go:96] duration metric: took 4.183799542s to provisionDockerMachine
	I0917 00:40:36.461598  632515 start.go:293] postStartSetup for "ha-671025" (driver="docker")
	I0917 00:40:36.461611  632515 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:40:36.461696  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:40:36.461774  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:40:36.482064  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:40:36.583021  632515 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:40:36.587466  632515 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:40:36.587499  632515 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:40:36.587507  632515 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:40:36.587513  632515 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:40:36.587525  632515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:40:36.587590  632515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:40:36.587663  632515 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:40:36.587676  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:40:36.587758  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:40:36.598899  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:40:36.626439  632515 start.go:296] duration metric: took 164.821052ms for postStartSetup
	I0917 00:40:36.626531  632515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:40:36.626576  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:40:36.645992  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:40:36.741181  632515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:40:36.746062  632515 fix.go:56] duration metric: took 4.78951996s for fixHost
	I0917 00:40:36.746099  632515 start.go:83] releasing machines lock for "ha-671025", held for 4.789584259s
	I0917 00:40:36.746164  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025
	I0917 00:40:36.764980  632515 ssh_runner.go:195] Run: cat /version.json
	I0917 00:40:36.765007  632515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:40:36.765036  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:40:36.765081  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:40:36.785445  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:40:36.786559  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:40:36.878519  632515 ssh_runner.go:195] Run: systemctl --version
	I0917 00:40:36.953900  632515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:40:37.096904  632515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:40:37.102385  632515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:40:37.112665  632515 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:40:37.112739  632515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:40:37.123238  632515 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
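The stat/find/mv sequence above is how pre-existing CNI configs are sidelined before kindnet takes over: matching files are renamed with a .mk_disabled suffix rather than deleted, so the step is reversible. Anything disabled this way can be listed on the node afterwards:

    # Renamed configs keep their content; only the suffix changes.
    ls /etc/cni/net.d/*.mk_disabled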
	I0917 00:40:37.123263  632515 start.go:495] detecting cgroup driver to use...
	I0917 00:40:37.123299  632515 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:40:37.123374  632515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:40:37.138404  632515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:40:37.151601  632515 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:40:37.151659  632515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:40:37.166312  632515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:40:37.179704  632515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:40:37.246162  632515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:40:37.315085  632515 docker.go:234] disabling docker service ...
	I0917 00:40:37.315155  632515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:40:37.328798  632515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:40:37.342782  632515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:40:37.410643  632515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:40:37.478475  632515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:40:37.490788  632515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:40:37.508635  632515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:40:37.508698  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:37.519575  632515 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:40:37.519647  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:37.531234  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:37.542040  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:37.552460  632515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:40:37.563900  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:37.574568  632515 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:37.585424  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:37.596307  632515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:40:37.605640  632515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:40:37.615373  632515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:40:37.676859  632515 ssh_runner.go:195] Run: sudo systemctl restart crio
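Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup, and unprivileged-port sysctl that the rest of this start sequence relies on. A quick check on the node (expected values taken from the commands above):

    # Expected after the edits:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0" inside default_sysctls
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf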
	I0917 00:40:37.773658  632515 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:40:37.773731  632515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:40:37.777956  632515 start.go:563] Will wait 60s for crictl version
	I0917 00:40:37.778019  632515 ssh_runner.go:195] Run: which crictl
	I0917 00:40:37.781929  632515 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:40:37.820023  632515 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:40:37.820131  632515 ssh_runner.go:195] Run: crio --version
	I0917 00:40:37.859582  632515 ssh_runner.go:195] Run: crio --version
	I0917 00:40:37.900788  632515 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:40:37.902302  632515 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:40:37.921379  632515 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:40:37.925935  632515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:40:37.938981  632515 kubeadm.go:875] updating cluster {Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false
logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:40:37.939161  632515 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:40:37.939220  632515 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:40:37.984187  632515 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:40:37.984208  632515 crio.go:433] Images already preloaded, skipping extraction
	I0917 00:40:37.984253  632515 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:40:38.022220  632515 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:40:38.022247  632515 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:40:38.022258  632515 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0917 00:40:38.022383  632515 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
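The ExecStart override above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 359-byte scp a few lines below). Once copied, the merged unit can be reviewed with systemd itself:

    # Show kubelet.service together with the minikube drop-in.
    sudo systemctl cat kubelet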
	I0917 00:40:38.022487  632515 ssh_runner.go:195] Run: crio config
	I0917 00:40:38.068795  632515 cni.go:84] Creating CNI manager for ""
	I0917 00:40:38.068823  632515 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0917 00:40:38.068838  632515 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:40:38.068868  632515 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-671025 NodeName:ha-671025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:40:38.069022  632515 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-671025"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
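The rendered kubeadm config is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (2205 bytes, per the scp below). Since this run bundles Kubernetes v1.34.0, the same kubeadm binary the kubelet unit points at can sanity-check the file; a hedged example:

    # Validate the generated config with the bundled kubeadm (path from the unit above).
    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new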
	I0917 00:40:38.069055  632515 kube-vip.go:115] generating kube-vip config ...
	I0917 00:40:38.069110  632515 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:40:38.083310  632515 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:40:38.083451  632515 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
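This manifest is a static pod: it never goes through the API server; it is written into the kubelet's staticPodPath (/etc/kubernetes/manifests per the KubeletConfiguration above; see the kube-vip.yaml copy below) and the kubelet runs it directly. A sketch of one safe way to drop such a file; the temp-file-plus-rename step is a general good practice assumed here, not necessarily what ssh_runner does:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// writeStaticPod drops a manifest where the kubelet's staticPodPath
	// watcher will pick it up. Writing a temp file and renaming keeps the
	// kubelet from ever reading a half-written manifest.
	func writeStaticPod(name string, manifest []byte) error {
		dir := "/etc/kubernetes/manifests"
		tmp := filepath.Join(dir, "."+name+".tmp")
		if err := os.WriteFile(tmp, manifest, 0o600); err != nil {
			return err
		}
		return os.Rename(tmp, filepath.Join(dir, name)) // atomic on the same filesystem
	}

	func main() {
		err := writeStaticPod("kube-vip.yaml", []byte("apiVersion: v1\nkind: Pod\n"))
		fmt.Println(err) // non-nil unless run as root on a node
	}
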
	I0917 00:40:38.083504  632515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:40:38.093822  632515 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:40:38.093953  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0917 00:40:38.104139  632515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0917 00:40:38.123612  632515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:40:38.143029  632515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I0917 00:40:38.162204  632515 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0917 00:40:38.181804  632515 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:40:38.185628  632515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
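The one-liner above is a rewrite-in-place of /etc/hosts: keep every line that does not already map control-plane.minikube.internal, append the VIP mapping, and copy the temp file back. The same logic in plain Go, for readers who find the brace group hard to parse (pinHost is an illustrative name):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pinHost drops any existing line ending in "<tab>name", then
	// appends "ip<tab>name", mirroring the grep -v / echo pipeline.
	func pinHost(ip, name string) error {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			return err
		}
		var keep []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				keep = append(keep, line)
			}
		}
		keep = append(keep, ip+"\t"+name)
		return os.WriteFile("/etc/hosts", []byte(strings.Join(keep, "\n")+"\n"), 0o644)
	}

	func main() {
		fmt.Println(pinHost("192.168.49.254", "control-plane.minikube.internal")) // needs root
	}
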
	I0917 00:40:38.198248  632515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:40:38.267211  632515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:40:38.295366  632515 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.2
	I0917 00:40:38.295402  632515 certs.go:194] generating shared ca certs ...
	I0917 00:40:38.295431  632515 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:40:38.295582  632515 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:40:38.295626  632515 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:40:38.295634  632515 certs.go:256] generating profile certs ...
	I0917 00:40:38.295702  632515 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:40:38.295725  632515 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.798d15c9
	I0917 00:40:38.295740  632515 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.798d15c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0917 00:40:38.563189  632515 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.798d15c9 ...
	I0917 00:40:38.563223  632515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.798d15c9: {Name:mk2fd2bd0b9f2426e27af5b187b55653c79ecc2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:40:38.563427  632515 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.798d15c9 ...
	I0917 00:40:38.563441  632515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.798d15c9: {Name:mkc6ea84046c9c5b881ab3e36ceca4d0c3a5f2ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:40:38.563513  632515 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt.798d15c9 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt
	I0917 00:40:38.563662  632515 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.798d15c9 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key
	I0917 00:40:38.563795  632515 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:40:38.563812  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:40:38.563827  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:40:38.563838  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:40:38.563851  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:40:38.563861  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:40:38.563871  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:40:38.563883  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:40:38.563893  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:40:38.563944  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:40:38.563973  632515 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:40:38.563983  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:40:38.564006  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:40:38.564037  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:40:38.564057  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:40:38.564097  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:40:38.564123  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:40:38.564136  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:40:38.564148  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:40:38.564676  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:40:38.592418  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:40:38.618464  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:40:38.645113  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:40:38.671903  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 00:40:38.699466  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:40:38.726719  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:40:38.754384  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:40:38.781770  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:40:38.810665  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:40:38.839255  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:40:38.870949  632515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:40:38.892273  632515 ssh_runner.go:195] Run: openssl version
	I0917 00:40:38.900199  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:40:38.915450  632515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:40:38.920310  632515 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:40:38.920382  632515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:40:38.928936  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:40:38.942961  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:40:38.957865  632515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:40:38.962632  632515 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:40:38.962710  632515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:40:38.974433  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:40:38.989008  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:40:39.003069  632515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:40:39.008507  632515 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:40:39.008598  632515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:40:39.020277  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
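The hash-then-symlink pairs above follow OpenSSL's CA lookup convention: certificates under /etc/ssl/certs are located via symlinks named <subject-hash>.0, with the hash taken from openssl x509 -hash. A sketch of a single install step that shells out to the same CLI (assumes the openssl binary and root, as on the node):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCA computes the OpenSSL subject hash of a PEM certificate
	// and links it into /etc/ssl/certs as <hash>.0, equivalent to the
	// openssl + ln -fs pair run over SSH above.
	func installCA(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // ln -fs: replace any stale link
		return os.Symlink(pem, link)
	}

	func main() {
		fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
	}
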
	I0917 00:40:39.033876  632515 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:40:39.039917  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:40:39.050424  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:40:39.061076  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:40:39.071182  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:40:39.081231  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:40:39.091810  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
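Each -checkend 86400 invocation asserts that the certificate will still be valid 24 hours from now; a failure here is what prompts regeneration before expiry. The equivalent check in pure Go stdlib:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the PEM certificate at path remains valid
	// for at least another d, i.e. openssl x509 -checkend.
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}
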
	I0917 00:40:39.101435  632515 kubeadm.go:392] StartCluster: {Name:ha-671025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:40:39.101589  632515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 00:40:39.101651  632515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:40:39.144935  632515 cri.go:89] found id: "881fdaefda118a66842bac8f4a5c129c196dccc90decb4c7ba8148ae8ae4202b"
	I0917 00:40:39.144965  632515 cri.go:89] found id: "939904409ad77a2fc09eadbf445fe900ce24ccc4275bf93dfc1aed5e7a941726"
	I0917 00:40:39.144971  632515 cri.go:89] found id: "b2732d3309fd11f5c1f39c1c412186079466128c5a6794923ea9143e7ab1def7"
	I0917 00:40:39.144976  632515 cri.go:89] found id: "5e41c9a2f042d57188a38266da0078263acc2fb7aab88eaebc87ad8a5d8cfe08"
	I0917 00:40:39.144980  632515 cri.go:89] found id: "ef9fd7a5f065787410db9cbe176f6f1e916deaae443ad0a27ff662f26b49d595"
	I0917 00:40:39.144985  632515 cri.go:89] found id: ""
	I0917 00:40:39.145041  632515 ssh_runner.go:195] Run: sudo runc list -f json
	I0917 00:40:39.166330  632515 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"5e41c9a2f042d57188a38266da0078263acc2fb7aab88eaebc87ad8a5d8cfe08","pid":899,"status":"running","bundle":"/run/containers/storage/overlay-containers/5e41c9a2f042d57188a38266da0078263acc2fb7aab88eaebc87ad8a5d8cfe08/userdata","rootfs":"/var/lib/containers/storage/overlay/f8daf2d0fc83f27d37f2c17a1131a37f9eb1d0219a84c2ec4a51c2ac9aba19f0/merged","created":"2025-09-17T00:40:38.956554866Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"85eae708","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"85eae708\",\"io.kubernetes.container.ports
\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"5e41c9a2f042d57188a38266da0078263acc2fb7aab88eaebc87ad8a5d8cfe08","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:40:38.882883895Z","io.kubernetes.cri-o.Image":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri-o.ImageRef":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system
\",\"io.kubernetes.pod.uid\":\"74a9cbd6392d4b9acfdd053de2761cb8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-ha-671025_74a9cbd6392d4b9acfdd053de2761cb8/kube-scheduler/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f8daf2d0fc83f27d37f2c17a1131a37f9eb1d0219a84c2ec4a51c2ac9aba19f0/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-ha-671025_kube-system_74a9cbd6392d4b9acfdd053de2761cb8_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3c6cfaaaada7cc47e15cae134822a33798e226c87792acbb4b511bcbabc03648/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3c6cfaaaada7cc47e15cae134822a33798e226c87792acbb4b511bcbabc03648","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-ha-671025_kube-system_74a9cbd6392d4b9acfdd053de2761cb8_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.Std
inOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/74a9cbd6392d4b9acfdd053de2761cb8/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/74a9cbd6392d4b9acfdd053de2761cb8/containers/kube-scheduler/0e31211d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"74a9cbd6392d4b9acfdd053de2761cb8","kubernetes.io/config.hash":"74a9cbd6392d4b9acfdd053de2761cb8","kubernetes.io/config.seen":"2025-09-17T00:40:38.373088265Z","kubernetes.io/config.source":"file","org.syste
md.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"881fdaefda118a66842bac8f4a5c129c196dccc90decb4c7ba8148ae8ae4202b","pid":936,"status":"running","bundle":"/run/containers/storage/overlay-containers/881fdaefda118a66842bac8f4a5c129c196dccc90decb4c7ba8148ae8ae4202b/userdata","rootfs":"/var/lib/containers/storage/overlay/316dd2f04dce7007a8c676808441c6f78dd40563fa3164de617ad905ac862962/merged","created":"2025-09-17T00:40:38.986326516Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d671eaa0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernet
es.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d671eaa0\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8443,\\\"containerPort\\\":8443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"881fdaefda118a66842bac8f4a5c129c196dccc90decb4c7ba8148ae8ae4202b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:40:38.918534597Z","io.kubernetes.cri-o.Image":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri-o.ImageRef":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.Labels
":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b5ccb738eb1160dc60c2973028d04964\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-ha-671025_b5ccb738eb1160dc60c2973028d04964/kube-apiserver/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/316dd2f04dce7007a8c676808441c6f78dd40563fa3164de617ad905ac862962/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-ha-671025_kube-system_b5ccb738eb1160dc60c2973028d04964_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/663c2fdb6a7826331bebf88dacb2edcc2793bd89ca89f8f2a2c6ee3dddcd6b65/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"663c2fdb6a7826331bebf88dacb2edcc2793bd89ca89f8f2a2c6ee3dddcd6b65","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-ha-67
1025_kube-system_b5ccb738eb1160dc60c2973028d04964_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b5ccb738eb1160dc60c2973028d04964/containers/kube-apiserver/adb66b20\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b5ccb738eb1160dc60c2973028d04964/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readon
ly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b5ccb738eb1160dc60c2973028d04964","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"b5ccb738eb1160dc60c2973028d04964","kubernetes.io/config.seen":"2025-09-17T00:40:38.373084752Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec
":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"939904409ad77a2fc09eadbf445fe900ce24ccc4275bf93dfc1aed5e7a941726","pid":939,"status":"running","bundle":"/run/containers/storage/overlay-containers/939904409ad77a2fc09eadbf445fe900ce24ccc4275bf93dfc1aed5e7a941726/userdata","rootfs":"/var/lib/containers/storage/overlay/8e80cca246b9d31c933201bacd6f475a4ce666ebf86e3918745046c21f32df01/merged","created":"2025-09-17T00:40:38.986209649Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7eaa1830","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7eaa1830\",\"io.kubernetes.container.ports\":\"[{\\\"na
me\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"939904409ad77a2fc09eadbf445fe900ce24ccc4275bf93dfc1aed5e7a941726","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:40:38.907118664Z","io.kubernetes.cri-o.Image":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri-o.ImageRef":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-ha-671025\",\"io.kubernetes.pod.namespace\"
:\"kube-system\",\"io.kubernetes.pod.uid\":\"8d1e0f98935496199c8e8278a2410d09\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-ha-671025_8d1e0f98935496199c8e8278a2410d09/kube-controller-manager/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8e80cca246b9d31c933201bacd6f475a4ce666ebf86e3918745046c21f32df01/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-ha-671025_kube-system_8d1e0f98935496199c8e8278a2410d09_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/cc5007dc0bc114337324c055cc351afd2237bc1485ad54a0117fa858e4782b09/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"cc5007dc0bc114337324c055cc351afd2237bc1485ad54a0117fa858e4782b09","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-ha-671025_kube-system_8d1e0f98935496199c8e8278a2410d09_0","io.kubernetes.cri-o.SeccompProfileP
ath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8d1e0f98935496199c8e8278a2410d09/containers/kube-controller-manager/efc1d7f6\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8d1e0f98935496199c8e8278a2410d09/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}
,{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8d1e0f98935496199c8e8278a2410d09","kubernetes.io/config.hash":"8d1e0f98935496199c8e8278a2410d09","kubernetes.io/config.seen":"2025-09-17T00:40:38.3730
86693Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b2732d3309fd11f5c1f39c1c412186079466128c5a6794923ea9143e7ab1def7","pid":907,"status":"running","bundle":"/run/containers/storage/overlay-containers/b2732d3309fd11f5c1f39c1c412186079466128c5a6794923ea9143e7ab1def7/userdata","rootfs":"/var/lib/containers/storage/overlay/ab934f84f0d64a133c76c0de44ec21738c90709d51eb7ff8657b8db8c417152a/merged","created":"2025-09-17T00:40:38.95393389Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d64ad60b","io.kubernetes.container.name":"kube-vip","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotatio
ns":"{\"io.kubernetes.container.hash\":\"d64ad60b\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b2732d3309fd11f5c1f39c1c412186079466128c5a6794923ea9143e7ab1def7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:40:38.895294523Z","io.kubernetes.cri-o.Image":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.ImageName":"ghcr.io/kube-vip/kube-vip:v1.0.0","io.kubernetes.cri-o.ImageRef":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-vip\",\"io.kubernetes.pod.name\":\"kube-vip-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a7817082b8b3b4ebaac6b1c6cc40fe3e\"}","io.kubernetes.cri-o.
LogPath":"/var/log/pods/kube-system_kube-vip-ha-671025_a7817082b8b3b4ebaac6b1c6cc40fe3e/kube-vip/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-vip\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ab934f84f0d64a133c76c0de44ec21738c90709d51eb7ff8657b8db8c417152a/merged","io.kubernetes.cri-o.Name":"k8s_kube-vip_kube-vip-ha-671025_kube-system_a7817082b8b3b4ebaac6b1c6cc40fe3e_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f79cd4d6fce11a79d448a28321ed754e18f98392ba5fbdafeaf8bb1113a45b8a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f79cd4d6fce11a79d448a28321ed754e18f98392ba5fbdafeaf8bb1113a45b8a","io.kubernetes.cri-o.SandboxName":"k8s_kube-vip-ha-671025_kube-system_a7817082b8b3b4ebaac6b1c6cc40fe3e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_p
ath\":\"/var/lib/kubelet/pods/a7817082b8b3b4ebaac6b1c6cc40fe3e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a7817082b8b3b4ebaac6b1c6cc40fe3e/containers/kube-vip/8832e24d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/admin.conf\",\"host_path\":\"/etc/kubernetes/admin.conf\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-vip-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a7817082b8b3b4ebaac6b1c6cc40fe3e","kubernetes.io/config.hash":"a7817082b8b3b4ebaac6b1c6cc40fe3e","kubernetes.io/config.seen":"2025-09-17T00:40:38.373089533Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true
","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ef9fd7a5f065787410db9cbe176f6f1e916deaae443ad0a27ff662f26b49d595","pid":916,"status":"running","bundle":"/run/containers/storage/overlay-containers/ef9fd7a5f065787410db9cbe176f6f1e916deaae443ad0a27ff662f26b49d595/userdata","rootfs":"/var/lib/containers/storage/overlay/7a6096809a9404429b3828fc8b58acae83c06741219b335c3b2b949a4220367e/merged","created":"2025-09-17T00:40:38.971421633Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9e20c65","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9e20c65\",\"io.kubernetes.container.
ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ef9fd7a5f065787410db9cbe176f6f1e916deaae443ad0a27ff662f26b49d595","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-17T00:40:38.881807971Z","io.kubernetes.cri-o.Image":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri-o.ImageRef":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-ha-671025\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\
":\"629bf94aa8286a4aae957269fae7c79b\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-ha-671025_629bf94aa8286a4aae957269fae7c79b/etcd/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7a6096809a9404429b3828fc8b58acae83c06741219b335c3b2b949a4220367e/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-ha-671025_kube-system_629bf94aa8286a4aae957269fae7c79b_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/adb3a22e9933ceddcc041c13f2cc2f963b5a59432e8bbcdfc2ff14814e4b87b0/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"adb3a22e9933ceddcc041c13f2cc2f963b5a59432e8bbcdfc2ff14814e4b87b0","io.kubernetes.cri-o.SandboxName":"k8s_etcd-ha-671025_kube-system_629bf94aa8286a4aae957269fae7c79b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"co
ntainer_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/629bf94aa8286a4aae957269fae7c79b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/629bf94aa8286a4aae957269fae7c79b/containers/etcd/e9d2259a\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-ha-671025","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"629bf94aa8286a4aae957269fae7c79b","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"629bf94aa8286a4aae957
269fae7c79b","kubernetes.io/config.seen":"2025-09-17T00:40:38.373079434Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0917 00:40:39.166778  632515 cri.go:126] list returned 5 containers
	I0917 00:40:39.166798  632515 cri.go:129] container: {ID:5e41c9a2f042d57188a38266da0078263acc2fb7aab88eaebc87ad8a5d8cfe08 Status:running}
	I0917 00:40:39.166821  632515 cri.go:135] skipping {5e41c9a2f042d57188a38266da0078263acc2fb7aab88eaebc87ad8a5d8cfe08 running}: state = "running", want "paused"
	I0917 00:40:39.166836  632515 cri.go:129] container: {ID:881fdaefda118a66842bac8f4a5c129c196dccc90decb4c7ba8148ae8ae4202b Status:running}
	I0917 00:40:39.166845  632515 cri.go:135] skipping {881fdaefda118a66842bac8f4a5c129c196dccc90decb4c7ba8148ae8ae4202b running}: state = "running", want "paused"
	I0917 00:40:39.166854  632515 cri.go:129] container: {ID:939904409ad77a2fc09eadbf445fe900ce24ccc4275bf93dfc1aed5e7a941726 Status:running}
	I0917 00:40:39.166860  632515 cri.go:135] skipping {939904409ad77a2fc09eadbf445fe900ce24ccc4275bf93dfc1aed5e7a941726 running}: state = "running", want "paused"
	I0917 00:40:39.166869  632515 cri.go:129] container: {ID:b2732d3309fd11f5c1f39c1c412186079466128c5a6794923ea9143e7ab1def7 Status:running}
	I0917 00:40:39.166874  632515 cri.go:135] skipping {b2732d3309fd11f5c1f39c1c412186079466128c5a6794923ea9143e7ab1def7 running}: state = "running", want "paused"
	I0917 00:40:39.166883  632515 cri.go:129] container: {ID:ef9fd7a5f065787410db9cbe176f6f1e916deaae443ad0a27ff662f26b49d595 Status:running}
	I0917 00:40:39.166889  632515 cri.go:135] skipping {ef9fd7a5f065787410db9cbe176f6f1e916deaae443ad0a27ff662f26b49d595 running}: state = "running", want "paused"
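The skipped entries are the point of this pass: the caller asked for paused kube-system containers, runc reports everything running, so the filtered list comes back empty and there is nothing to act on. A sketch of that filter, with the JSON struct trimmed to the two fields it needs:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer carries just the fields this filter reads from the
	// "runc list -f json" array shown in the log.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// listInState returns the IDs of containers whose status matches
	// want ("paused" here), skipping the rest.
	func listInState(want string) ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return nil, err
		}
		var all []runcContainer
		if err := json.Unmarshal(out, &all); err != nil {
			return nil, err
		}
		var ids []string
		for _, c := range all {
			if c.Status == want {
				ids = append(ids, c.ID)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listInState("paused")
		fmt.Println(ids, err)
	}
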
	I0917 00:40:39.166941  632515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:40:39.178023  632515 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 00:40:39.178070  632515 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 00:40:39.178118  632515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 00:40:39.188385  632515 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:40:39.188902  632515 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-671025" does not appear in /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:40:39.189037  632515 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-517646/kubeconfig needs updating (will repair): [kubeconfig missing "ha-671025" cluster setting kubeconfig missing "ha-671025" context setting]
	I0917 00:40:39.189368  632515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:40:39.190094  632515 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
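The dump above is a client-go rest.Config aimed at the repaired kubeconfig entry. A minimal sketch of building the same kind of client by hand (requires the k8s.io/client-go module; the file paths are this profile's own):

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Host and TLS material match the logged config; everything else
		// is left at its zero value, as in the dump.
		cfg := &rest.Config{
			Host: "https://192.168.49.2:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key",
				CAFile:   "/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt",
			},
		}
		client, err := kubernetes.NewForConfig(cfg)
		fmt.Println(client != nil, err)
	}
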
	I0917 00:40:39.190673  632515 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 00:40:39.190691  632515 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 00:40:39.190697  632515 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 00:40:39.190702  632515 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 00:40:39.190709  632515 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 00:40:39.190740  632515 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0917 00:40:39.191174  632515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 00:40:39.200970  632515 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0917 00:40:39.200996  632515 kubeadm.go:593] duration metric: took 22.91871ms to restartPrimaryControlPlane
	I0917 00:40:39.201006  632515 kubeadm.go:394] duration metric: took 99.589549ms to StartCluster
	I0917 00:40:39.201027  632515 settings.go:142] acquiring lock: {Name:mk3b4e5824fb8718eece00dc70a9d05f0af2a028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:40:39.201103  632515 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:40:39.201826  632515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:40:39.202080  632515 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:40:39.202107  632515 start.go:241] waiting for startup goroutines ...
	I0917 00:40:39.202116  632515 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 00:40:39.202366  632515 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:40:39.205103  632515 out.go:179] * Enabled addons: 
	I0917 00:40:39.206259  632515 addons.go:514] duration metric: took 4.134791ms for enable addons: enabled=[]
	I0917 00:40:39.206295  632515 start.go:246] waiting for cluster config update ...
	I0917 00:40:39.206310  632515 start.go:255] writing updated cluster config ...
	I0917 00:40:39.208316  632515 out.go:203] 
	I0917 00:40:39.209913  632515 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:40:39.210037  632515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:40:39.211628  632515 out.go:179] * Starting "ha-671025-m02" control-plane node in "ha-671025" cluster
	I0917 00:40:39.212849  632515 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:40:39.214412  632515 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:40:39.215588  632515 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:40:39.215619  632515 cache.go:58] Caching tarball of preloaded images
	I0917 00:40:39.215696  632515 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:40:39.215727  632515 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:40:39.215739  632515 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:40:39.215894  632515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:40:39.240756  632515 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:40:39.240793  632515 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:40:39.240819  632515 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:40:39.240852  632515 start.go:360] acquireMachinesLock for ha-671025-m02: {Name:mk1465985964f60af81adbf10dbe0a21c7eb20d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:40:39.240925  632515 start.go:364] duration metric: took 51.172µs to acquireMachinesLock for "ha-671025-m02"
	I0917 00:40:39.240952  632515 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:40:39.240974  632515 fix.go:54] fixHost starting: m02
	I0917 00:40:39.241212  632515 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:40:39.262782  632515 fix.go:112] recreateIfNeeded on ha-671025-m02: state=Stopped err=<nil>
	W0917 00:40:39.262826  632515 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:40:39.264705  632515 out.go:252] * Restarting existing docker container for "ha-671025-m02" ...
	I0917 00:40:39.264774  632515 cli_runner.go:164] Run: docker start ha-671025-m02
	I0917 00:40:39.525550  632515 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:40:39.548227  632515 kic.go:430] container "ha-671025-m02" state is running.
	I0917 00:40:39.548819  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:40:39.573516  632515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:40:39.573761  632515 machine.go:93] provisionDockerMachine start ...
	I0917 00:40:39.573819  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:40:39.595101  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:40:39.595449  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I0917 00:40:39.595465  632515 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:40:39.596146  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57494->127.0.0.1:33208: read: connection reset by peer
	I0917 00:40:42.744302  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m02
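The connection reset at 00:40:39 followed by the successful hostname command at 00:40:42 is a dial racing sshd inside the just-restarted container; the remedy is simply to retry. A sketch of such a retry loop with golang.org/x/crypto/ssh (auth is elided, and the backoff policy is an assumption, not minikube's exact behaviour):

	package main

	import (
		"fmt"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// dialWithRetry keeps redialing until sshd answers, backing off
	// exponentially between attempts.
	func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
		var err error
		for i := 0; i < attempts; i++ {
			var c *ssh.Client
			if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
				return c, nil
			}
			time.Sleep(time.Second << i)
		}
		return nil, fmt.Errorf("ssh dial %s: %w", addr, err)
	}

	func main() {
		cfg := &ssh.ClientConfig{
			User:            "docker",
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches the local-container trust model
		}
		_, err := dialWithRetry("127.0.0.1:33208", cfg, 5)
		fmt.Println(err)
	}
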
	
	I0917 00:40:42.744341  632515 ubuntu.go:182] provisioning hostname "ha-671025-m02"
	I0917 00:40:42.744440  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:40:42.772727  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:40:42.773041  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I0917 00:40:42.773066  632515 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m02 && echo "ha-671025-m02" | sudo tee /etc/hostname
	I0917 00:40:42.966840  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671025-m02
	
	I0917 00:40:42.966938  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:40:42.999313  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:40:42.999622  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I0917 00:40:42.999654  632515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:40:43.166450  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:40:43.166486  632515 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:40:43.166512  632515 ubuntu.go:190] setting up certificates
	I0917 00:40:43.166528  632515 provision.go:84] configureAuth start
	I0917 00:40:43.166598  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:40:43.191986  632515 provision.go:143] copyHostCerts
	I0917 00:40:43.192036  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:40:43.192077  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:40:43.192090  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:40:43.192181  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:40:43.192299  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:40:43.192337  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:40:43.192347  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:40:43.192424  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:40:43.192541  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:40:43.192561  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:40:43.192566  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:40:43.192607  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:40:43.192708  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m02 san=[127.0.0.1 192.168.49.3 ha-671025-m02 localhost minikube]
	I0917 00:40:43.430833  632515 provision.go:177] copyRemoteCerts
	I0917 00:40:43.430920  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:40:43.430997  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:40:43.459960  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:40:43.568596  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0917 00:40:43.568675  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 00:40:43.595799  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0917 00:40:43.595866  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:40:43.622160  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0917 00:40:43.622224  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:40:43.650486  632515 provision.go:87] duration metric: took 483.938346ms to configureAuth
	I0917 00:40:43.650520  632515 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:40:43.650749  632515 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:40:43.650849  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:40:43.669815  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:40:43.670087  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33208 <nil> <nil>}
	I0917 00:40:43.670108  632515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:40:44.121666  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
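
A quick way to confirm the flag landed is to read back the drop-in written above; that crio.service actually sources /etc/sysconfig/crio.minikube via an EnvironmentFile= directive is minikube's usual wiring and an assumption here, not something this log shows:

    $ cat /etc/sysconfig/crio.minikube
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    $ systemctl show crio -p Environment    # should echo the same variable if the unit loads the file
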
	I0917 00:40:44.121696  632515 machine.go:96] duration metric: took 4.547919987s to provisionDockerMachine
	I0917 00:40:44.121708  632515 start.go:293] postStartSetup for "ha-671025-m02" (driver="docker")
	I0917 00:40:44.121722  632515 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:40:44.121789  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:40:44.121842  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:40:44.144239  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:40:44.248012  632515 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:40:44.252106  632515 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:40:44.252137  632515 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:40:44.252145  632515 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:40:44.252153  632515 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:40:44.252168  632515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 00:40:44.252230  632515 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 00:40:44.252311  632515 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 00:40:44.252321  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /etc/ssl/certs/5212732.pem
	I0917 00:40:44.252424  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 00:40:44.262184  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:40:44.291527  632515 start.go:296] duration metric: took 169.798795ms for postStartSetup
	I0917 00:40:44.291632  632515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:40:44.291683  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:40:44.312473  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:40:44.406975  632515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:40:44.411956  632515 fix.go:56] duration metric: took 5.170985164s for fixHost
	I0917 00:40:44.411984  632515 start.go:83] releasing machines lock for "ha-671025-m02", held for 5.171045077s
	I0917 00:40:44.412067  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m02
	I0917 00:40:44.433399  632515 out.go:179] * Found network options:
	I0917 00:40:44.434772  632515 out.go:179]   - NO_PROXY=192.168.49.2
	W0917 00:40:44.436118  632515 proxy.go:120] fail to check proxy env: Error ip not in block
	W0917 00:40:44.436158  632515 proxy.go:120] fail to check proxy env: Error ip not in block
	I0917 00:40:44.436226  632515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:40:44.436275  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:40:44.436331  632515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:40:44.436542  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m02
	I0917 00:40:44.456132  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:40:44.456175  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33208 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m02/id_rsa Username:docker}
	I0917 00:40:44.691367  632515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:40:44.696760  632515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:40:44.706855  632515 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:40:44.706939  632515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:40:44.717107  632515 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:40:44.717138  632515 start.go:495] detecting cgroup driver to use...
	I0917 00:40:44.717177  632515 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 00:40:44.717226  632515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:40:44.731567  632515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:40:44.745939  632515 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:40:44.745990  632515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:40:44.763319  632515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:40:44.776506  632515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:40:44.894007  632515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:40:45.038909  632515 docker.go:234] disabling docker service ...
	I0917 00:40:45.038982  632515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:40:45.053638  632515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:40:45.066893  632515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:40:45.205587  632515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:40:45.364462  632515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:40:45.383628  632515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:40:45.405497  632515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:40:45.405564  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:45.416825  632515 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 00:40:45.416919  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:45.428902  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:45.443620  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:45.455563  632515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:40:45.466416  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:45.478152  632515 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:45.490283  632515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:40:45.502127  632515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:40:45.512246  632515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:40:45.521843  632515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:40:45.640461  632515 ssh_runner.go:195] Run: sudo systemctl restart crio
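
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys. Only the values this log touches are shown, and which [crio.*] table each key lives in is standard cri-o layout rather than something the log confirms:

    $ sudo cat /etc/crio/crio.conf.d/02-crio.conf
    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"
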
	I0917 00:40:45.896355  632515 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:40:45.896473  632515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:40:45.900956  632515 start.go:563] Will wait 60s for crictl version
	I0917 00:40:45.901026  632515 ssh_runner.go:195] Run: which crictl
	I0917 00:40:45.905222  632515 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:40:45.942130  632515 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:40:45.942214  632515 ssh_runner.go:195] Run: crio --version
	I0917 00:40:45.980992  632515 ssh_runner.go:195] Run: crio --version
	I0917 00:40:46.023154  632515 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:40:46.024799  632515 out.go:179]   - env NO_PROXY=192.168.49.2
	I0917 00:40:46.026246  632515 cli_runner.go:164] Run: docker network inspect ha-671025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:40:46.045491  632515 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:40:46.049717  632515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:40:46.061967  632515 mustload.go:65] Loading cluster: ha-671025
	I0917 00:40:46.062188  632515 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:40:46.062431  632515 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:40:46.080226  632515 host.go:66] Checking if "ha-671025" exists ...
	I0917 00:40:46.080512  632515 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025 for IP: 192.168.49.3
	I0917 00:40:46.080525  632515 certs.go:194] generating shared ca certs ...
	I0917 00:40:46.080543  632515 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:40:46.080697  632515 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 00:40:46.080772  632515 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 00:40:46.080790  632515 certs.go:256] generating profile certs ...
	I0917 00:40:46.080890  632515 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key
	I0917 00:40:46.080964  632515 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key.d800739c
	I0917 00:40:46.081013  632515 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key
	I0917 00:40:46.081029  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 00:40:46.081049  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0917 00:40:46.081088  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 00:40:46.081108  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 00:40:46.081127  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 00:40:46.081145  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 00:40:46.081164  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 00:40:46.081180  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 00:40:46.081259  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 00:40:46.081301  632515 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 00:40:46.081315  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:40:46.081346  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 00:40:46.081376  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:40:46.081438  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 00:40:46.081493  632515 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 00:40:46.081540  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:40:46.081561  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem -> /usr/share/ca-certificates/521273.pem
	I0917 00:40:46.081587  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> /usr/share/ca-certificates/5212732.pem
	I0917 00:40:46.081702  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025
	I0917 00:40:46.101025  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025/id_rsa Username:docker}
	I0917 00:40:46.189723  632515 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0917 00:40:46.194282  632515 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0917 00:40:46.215250  632515 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0917 00:40:46.220905  632515 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0917 00:40:46.238548  632515 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0917 00:40:46.243187  632515 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0917 00:40:46.259431  632515 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0917 00:40:46.263838  632515 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0917 00:40:46.278404  632515 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0917 00:40:46.282305  632515 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0917 00:40:46.297261  632515 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0917 00:40:46.301896  632515 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0917 00:40:46.316846  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:40:46.346007  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:40:46.376478  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:40:46.405429  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 00:40:46.433262  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 00:40:46.462010  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:40:46.490142  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:40:46.518271  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:40:46.546483  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:40:46.574948  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 00:40:46.603480  632515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 00:40:46.632648  632515 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0917 00:40:46.654796  632515 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0917 00:40:46.676468  632515 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0917 00:40:46.697823  632515 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0917 00:40:46.718611  632515 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0917 00:40:46.740412  632515 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0917 00:40:46.763172  632515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0917 00:40:46.784790  632515 ssh_runner.go:195] Run: openssl version
	I0917 00:40:46.791348  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:40:46.802517  632515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:40:46.806431  632515 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:40:46.806479  632515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:40:46.813628  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:40:46.824091  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 00:40:46.835716  632515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 00:40:46.839866  632515 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 00:40:46.839925  632515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 00:40:46.847187  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 00:40:46.857010  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 00:40:46.867839  632515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 00:40:46.871864  632515 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 00:40:46.871928  632515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 00:40:46.879300  632515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
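
The 8-hex-digit symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes: the `openssl x509 -hash -noout` step computes the name that OpenSSL's certificate-directory lookup expects, and the `ln -fs` creates it. For the cluster CA, for example:

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
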
	I0917 00:40:46.889305  632515 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:40:46.893181  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:40:46.900268  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:40:46.907385  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:40:46.914194  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:40:46.921136  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:40:46.927929  632515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
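
Each `-checkend 86400` probe above exits 0 only if the certificate will still be valid 24 hours from now, so a non-zero status is what would force regeneration. The same check by hand:

    $ openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
        && echo "valid for at least 24h" || echo "expiring soon"
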
	I0917 00:40:46.934672  632515 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 crio true true} ...
	I0917 00:40:46.934768  632515 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-671025-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-671025 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
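
The empty `ExecStart=` line in the drop-in above is the standard systemd reset idiom: it clears the ExecStart inherited from kubelet.service so the following line replaces the command instead of appending a second one. The effective unit can be inspected on the node (a sketch, not from this log):

    $ systemctl cat kubelet    # base unit plus the 10-kubeadm.conf overlay written below
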
	I0917 00:40:46.934793  632515 kube-vip.go:115] generating kube-vip config ...
	I0917 00:40:46.934825  632515 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0917 00:40:46.949032  632515 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
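
The empty lsmod output above simply means the ip_vs module is not loaded in this kic container's kernel, so kube-vip is generated without its IPVS control-plane load balancer and falls back to the ARP-announced VIP seen in the config below. The probe is reproducible by hand:

    $ lsmod | grep ip_vs || echo "ip_vs not loaded"    # loading it would need modprobe against the host kernel
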
	I0917 00:40:46.949125  632515 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0917 00:40:46.949189  632515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:40:46.958935  632515 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:40:46.958997  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0917 00:40:46.969133  632515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 00:40:46.989052  632515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:40:47.009277  632515 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
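
Because kube-vip.yaml is written straight into /etc/kubernetes/manifests, it runs as a static pod: the kubelet started below picks it up on its own, with no kubectl apply involved. One way to confirm on the node (stock crictl flags; a sketch, not taken from this log):

    $ sudo crictl pods --name kube-vip
    $ sudo crictl ps --name kube-vip
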
	I0917 00:40:47.030373  632515 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0917 00:40:47.034630  632515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:40:47.046734  632515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:40:47.153601  632515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:40:47.166587  632515 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:40:47.166924  632515 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:40:47.169412  632515 out.go:179] * Verifying Kubernetes components...
	I0917 00:40:47.170627  632515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:40:47.282243  632515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:40:47.295175  632515 kapi.go:59] client config for ha-671025: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/client.key", CAFile:"/home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0917 00:40:47.295250  632515 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0917 00:40:47.295529  632515 node_ready.go:35] waiting up to 6m0s for node "ha-671025-m02" to be "Ready" ...
	I0917 00:40:47.304206  632515 node_ready.go:49] node "ha-671025-m02" is "Ready"
	I0917 00:40:47.304237  632515 node_ready.go:38] duration metric: took 8.673255ms for node "ha-671025-m02" to be "Ready" ...
	I0917 00:40:47.304254  632515 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:40:47.304311  632515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:40:47.316591  632515 api_server.go:72] duration metric: took 149.952703ms to wait for apiserver process to appear ...
	I0917 00:40:47.316615  632515 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:40:47.316635  632515 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:40:47.322489  632515 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
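
The same health probe by hand, against the per-node endpoint that replaced the stale VIP host above (-k because the apiserver cert is signed by minikubeCA, which the host does not trust by default):

    $ curl -sk https://192.168.49.2:8443/healthz
    ok
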
	I0917 00:40:47.323523  632515 api_server.go:141] control plane version: v1.34.0
	I0917 00:40:47.323550  632515 api_server.go:131] duration metric: took 6.928789ms to wait for apiserver health ...
	I0917 00:40:47.323558  632515 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:40:47.329799  632515 system_pods.go:59] 24 kube-system pods found
	I0917 00:40:47.329836  632515 system_pods.go:61] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:40:47.329843  632515 system_pods.go:61] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:40:47.329851  632515 system_pods.go:61] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:40:47.329857  632515 system_pods.go:61] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:40:47.329861  632515 system_pods.go:61] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running
	I0917 00:40:47.329864  632515 system_pods.go:61] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:40:47.329868  632515 system_pods.go:61] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:40:47.329874  632515 system_pods.go:61] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:40:47.329879  632515 system_pods.go:61] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:40:47.329888  632515 system_pods.go:61] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:40:47.329893  632515 system_pods.go:61] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running
	I0917 00:40:47.329901  632515 system_pods.go:61] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:40:47.329908  632515 system_pods.go:61] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:40:47.329912  632515 system_pods.go:61] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running
	I0917 00:40:47.329918  632515 system_pods.go:61] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:40:47.329922  632515 system_pods.go:61] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:40:47.329925  632515 system_pods.go:61] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:40:47.329930  632515 system_pods.go:61] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:40:47.329937  632515 system_pods.go:61] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:40:47.329941  632515 system_pods.go:61] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running
	I0917 00:40:47.329946  632515 system_pods.go:61] "kube-vip-ha-671025" [bcb7c84b-932c-463e-a710-1d665741e70a] Running
	I0917 00:40:47.329949  632515 system_pods.go:61] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:40:47.329952  632515 system_pods.go:61] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:40:47.329954  632515 system_pods.go:61] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:40:47.329960  632515 system_pods.go:74] duration metric: took 6.396975ms to wait for pod list to return data ...
	I0917 00:40:47.329969  632515 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:40:47.333216  632515 default_sa.go:45] found service account: "default"
	I0917 00:40:47.333237  632515 default_sa.go:55] duration metric: took 3.262813ms for default service account to be created ...
	I0917 00:40:47.333246  632515 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:40:47.338819  632515 system_pods.go:86] 24 kube-system pods found
	I0917 00:40:47.338855  632515 system_pods.go:89] "coredns-66bc5c9577-mqh24" [98a1c881-a129-4c32-9b46-dd6f5cbe5281] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:40:47.338863  632515 system_pods.go:89] "coredns-66bc5c9577-vfj56" [f3d26661-ca38-4e11-b9c1-ed434a28cdf6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:40:47.338871  632515 system_pods.go:89] "etcd-ha-671025" [2477808a-7111-4385-9e26-cbf17330051f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:40:47.338877  632515 system_pods.go:89] "etcd-ha-671025-m02" [8ea66d09-97d1-4b07-b112-bd651485996b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 00:40:47.338881  632515 system_pods.go:89] "etcd-ha-671025-m03" [1a8eb7af-9aaa-44e2-840e-717a60a71c69] Running
	I0917 00:40:47.338885  632515 system_pods.go:89] "kindnet-7scsq" [4fa1fd3e-cd2a-4e0a-beb8-9c495fa182ed] Running
	I0917 00:40:47.338888  632515 system_pods.go:89] "kindnet-9w6f7" [8aefd42c-944b-4962-8bdf-c34166e2c56e] Running
	I0917 00:40:47.338891  632515 system_pods.go:89] "kindnet-9zvhz" [6247c758-ee8c-40db-b577-561bfc484bc1] Running
	I0917 00:40:47.338896  632515 system_pods.go:89] "kube-apiserver-ha-671025" [1dbd5b35-f97c-46d5-bb61-40eff5fc3bdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:40:47.338903  632515 system_pods.go:89] "kube-apiserver-ha-671025-m02" [47299bb4-151f-4d77-b9a2-fd1376bb4cfb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 00:40:47.338910  632515 system_pods.go:89] "kube-apiserver-ha-671025-m03" [2695f2ac-415a-430e-9dea-0f61c68455a5] Running
	I0917 00:40:47.338916  632515 system_pods.go:89] "kube-controller-manager-ha-671025" [7e80ec0d-3738-41dc-b83a-11f17f0b9861] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:40:47.338921  632515 system_pods.go:89] "kube-controller-manager-ha-671025-m02" [a396e08b-d40b-4aa2-a10b-60d93f6b0960] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 00:40:47.338928  632515 system_pods.go:89] "kube-controller-manager-ha-671025-m03" [b293923a-51db-4149-b921-590dd6e48d0f] Running
	I0917 00:40:47.338932  632515 system_pods.go:89] "kube-proxy-4k8lz" [23c8e412-493e-463b-b4ce-0b500bd50d72] Running
	I0917 00:40:47.338936  632515 system_pods.go:89] "kube-proxy-f58dt" [452eeb3b-1f3c-4a3a-8d5e-c67097b88369] Running
	I0917 00:40:47.338939  632515 system_pods.go:89] "kube-proxy-q96zd" [9fe8a312-c296-4c84-9c30-5e578c24e82e] Running
	I0917 00:40:47.338946  632515 system_pods.go:89] "kube-scheduler-ha-671025" [ef02aa67-b74e-403e-b8aa-5d557a59062a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:40:47.338951  632515 system_pods.go:89] "kube-scheduler-ha-671025-m02" [4f8880a0-89e0-439a-b4fe-898ef42b8329] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 00:40:47.338956  632515 system_pods.go:89] "kube-scheduler-ha-671025-m03" [f5f9ef23-ce13-4729-b96a-1e64e03b941a] Running
	I0917 00:40:47.338959  632515 system_pods.go:89] "kube-vip-ha-671025" [bcb7c84b-932c-463e-a710-1d665741e70a] Running
	I0917 00:40:47.338962  632515 system_pods.go:89] "kube-vip-ha-671025-m02" [d98df3d2-3054-4e6f-823c-08a347b61834] Running
	I0917 00:40:47.338965  632515 system_pods.go:89] "kube-vip-ha-671025-m03" [40ba489c-2026-4b5a-8626-f4d881bf5949] Running
	I0917 00:40:47.338968  632515 system_pods.go:89] "storage-provisioner" [b6e26f82-6f5f-47b0-a0bf-5ed9e54aa6ed] Running
	I0917 00:40:47.338975  632515 system_pods.go:126] duration metric: took 5.723447ms to wait for k8s-apps to be running ...
	I0917 00:40:47.338984  632515 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:40:47.339032  632515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:40:47.352522  632515 system_svc.go:56] duration metric: took 13.515878ms WaitForService to wait for kubelet
	I0917 00:40:47.352562  632515 kubeadm.go:578] duration metric: took 185.927121ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:40:47.352585  632515 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:40:47.356328  632515 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:40:47.356359  632515 node_conditions.go:123] node cpu capacity is 8
	I0917 00:40:47.356373  632515 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0917 00:40:47.356379  632515 node_conditions.go:123] node cpu capacity is 8
	I0917 00:40:47.356385  632515 node_conditions.go:105] duration metric: took 3.794845ms to run NodePressure ...
	I0917 00:40:47.356411  632515 start.go:241] waiting for startup goroutines ...
	I0917 00:40:47.356443  632515 start.go:255] writing updated cluster config ...
	I0917 00:40:47.358857  632515 out.go:203] 
	I0917 00:40:47.360340  632515 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:40:47.360490  632515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:40:47.362332  632515 out.go:179] * Starting "ha-671025-m04" worker node in "ha-671025" cluster
	I0917 00:40:47.363542  632515 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:40:47.364625  632515 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:40:47.365563  632515 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:40:47.365591  632515 cache.go:58] Caching tarball of preloaded images
	I0917 00:40:47.365656  632515 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:40:47.365708  632515 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 00:40:47.365722  632515 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:40:47.365844  632515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:40:47.387506  632515 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:40:47.387525  632515 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:40:47.387542  632515 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:40:47.387573  632515 start.go:360] acquireMachinesLock for ha-671025-m04: {Name:mka8d143727db583191b041d9fdffdc34290d3fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:40:47.387634  632515 start.go:364] duration metric: took 39.357µs to acquireMachinesLock for "ha-671025-m04"
	I0917 00:40:47.387655  632515 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:40:47.387662  632515 fix.go:54] fixHost starting: m04
	I0917 00:40:47.387922  632515 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:40:47.405966  632515 fix.go:112] recreateIfNeeded on ha-671025-m04: state=Stopped err=<nil>
	W0917 00:40:47.406001  632515 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:40:47.407782  632515 out.go:252] * Restarting existing docker container for "ha-671025-m04" ...
	I0917 00:40:47.407855  632515 cli_runner.go:164] Run: docker start ha-671025-m04
	I0917 00:40:47.672894  632515 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:40:47.693808  632515 kic.go:430] container "ha-671025-m04" state is running.
	I0917 00:40:47.694266  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:40:47.716290  632515 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/ha-671025/config.json ...
	I0917 00:40:47.716578  632515 machine.go:93] provisionDockerMachine start ...
	I0917 00:40:47.716642  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:40:47.738438  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:40:47.738710  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I0917 00:40:47.738727  632515 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:40:47.739696  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35420->127.0.0.1:33213: read: connection reset by peer
	I0917 00:40:50.777847  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	[... the same dial error ("ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain") repeats every ~3s, I0917 00:40:53.815804 through I0917 00:43:43.920508; 57 near-identical attempts elided ...]
	I0917 00:43:46.958281  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:43:49.959450  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:43:49.959523  632515 ubuntu.go:182] provisioning hostname "ha-671025-m04"
	I0917 00:43:49.959627  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:43:49.979209  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:43:49.979506  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I0917 00:43:49.979526  632515 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m04 && echo "ha-671025-m04" | sudo tee /etc/hostname
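The retry storms on either side of this command all report the same root cause: TCP to the node connects, but the SSH server rejects the only credential offered (the machine's id_rsa), so the handshake dies with "attempted methods [none publickey], no supported methods remain". A minimal Go sketch of the same kind of dial, using golang.org/x/crypto/ssh with placeholder host/port/key values rather than minikube's actual code, hits this exact error class whenever the key is not in the guest's authorized_keys:

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder key path; minikube keeps its per-machine key under
	// .minikube/machines/<name>/id_rsa.
	keyBytes, err := os.ReadFile("/path/to/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
	}
	// Same shape of dial the log shows failing: TCP connects, but the server
	// rejects the offered key, so the handshake aborts with "attempted
	// methods [none publickey], no supported methods remain".
	client, err := ssh.Dial("tcp", "127.0.0.1:33213", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}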
	I0917 00:43:50.016366  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	[... the same "Error dialing TCP" handshake failure repeats every ~3s, 60 attempts in all, through 00:46:49 ...]
	I0917 00:46:52.242138  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:46:52.242268  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:46:52.264751  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:46:52.265071  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I0917 00:46:52.265100  632515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
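
The snippet above is minikube's idempotent /etc/hosts fix-up: if no line already maps the hostname, it rewrites an existing 127.0.1.1 entry in place, otherwise it appends a fresh one. A rough Go equivalent of that logic (a sketch, not the shell that actually runs on the node; the path and hostname are taken from this log) looks like:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell above: leave /etc/hosts alone if the
// hostname is already mapped, otherwise rewrite the 127.0.1.1 line or append
// a new mapping.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), " "+hostname) {
			return nil // already mapped, nothing to do
		}
	}
	entry := "127.0.1.1 " + hostname
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = entry // replace the existing 127.0.1.1 mapping
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	lines = append(lines, entry) // no 127.0.1.1 line yet: append one
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "ha-671025-m04"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}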
	I0917 00:46:52.301891  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	[... the same "Error dialing TCP" handshake failure repeats every ~3s, 60 attempts in all, through 00:49:51 ...]
	I0917 00:49:54.531505  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:49:54.531573  632515 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 00:49:54.531626  632515 ubuntu.go:190] setting up certificates
	I0917 00:49:54.531647  632515 provision.go:84] configureAuth start
	I0917 00:49:54.531739  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:49:54.551339  632515 provision.go:143] copyHostCerts
	I0917 00:49:54.551429  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:49:54.551478  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:49:54.551489  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:49:54.551576  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:49:54.551695  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:49:54.551716  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:49:54.551724  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:49:54.551770  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:49:54.551842  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:49:54.551862  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:49:54.551870  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:49:54.551909  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:49:54.551987  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
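
configureAuth's "generating server cert" step mints a TLS server certificate signed by the CA under .minikube/certs, carrying the SANs listed above (the node IPs plus ha-671025-m04, localhost, minikube). The sketch below shows the general shape of that operation with Go's crypto/x509; it generates a throwaway CA instead of loading minikube's ca.pem, so it illustrates the mechanism rather than reproducing the exact files or key type:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key/cert standing in for minikube's ca.pem / ca-key.pem.
	caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate carrying the SANs from the log line above.
	srvKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-671025-m04"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-671025-m04", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}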
	I0917 00:49:55.075317  632515 provision.go:177] copyRemoteCerts
	I0917 00:49:55.075413  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:49:55.075466  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:49:55.094562  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:49:55.131095  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:55.131145  632515 retry.go:31] will retry after 181.743857ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:49:55.349302  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:55.349337  632515 retry.go:31] will retry after 327.982556ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:49:55.713462  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:55.713496  632515 retry.go:31] will retry after 348.016843ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:49:56.097960  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:56.097998  632515 retry.go:31] will retry after 483.850248ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:49:56.619626  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:56.619759  632515 retry.go:31] will retry after 144.183744ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:56.765023  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:49:56.784089  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:49:56.821621  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:56.821666  632515 retry.go:31] will retry after 278.594161ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:49:57.137033  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:57.137068  632515 retry.go:31] will retry after 428.68953ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:49:57.603586  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:57.603622  632515 retry.go:31] will retry after 735.913432ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:49:58.377129  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:58.377217  632515 provision.go:87] duration metric: took 3.845563473s to configureAuth
	W0917 00:49:58.377227  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:58.377241  632515 retry.go:31] will retry after 106.534µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
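
Every "will retry after ..." line here comes from minikube's retry helper (retry.go:31), which re-runs the failing operation with an increasing delay until it either succeeds or exhausts its budget. A stand-alone sketch of that pattern in Go (a hypothetical helper, not minikube's actual retry package):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryAfter runs fn up to maxTries times, doubling the sleep between
// attempts and printing the "will retry after" delay the way the log lines
// above do. It returns the last error if every attempt fails.
func retryAfter(maxTries int, fn func() error) error {
	delay := 100 * time.Millisecond
	var err error
	for i := 0; i < maxTries; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	attempts := 0
	err := retryAfter(5, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("ssh: handshake failed")
		}
		return nil
	})
	fmt.Println("result:", err, "after", attempts, "attempts")
}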
	I0917 00:49:58.378407  632515 provision.go:84] configureAuth start
	I0917 00:49:58.378491  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:49:58.396865  632515 provision.go:143] copyHostCerts
	I0917 00:49:58.396914  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:49:58.396954  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:49:58.396964  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:49:58.397051  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:49:58.397179  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:49:58.397209  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:49:58.397215  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:49:58.397247  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:49:58.397342  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:49:58.397378  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:49:58.397384  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:49:58.397427  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:49:58.397525  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:49:58.711543  632515 provision.go:177] copyRemoteCerts
	I0917 00:49:58.711617  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:49:58.711656  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:49:58.732044  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:49:58.768196  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:58.768239  632515 retry.go:31] will retry after 272.740384ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:49:59.077518  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:59.077563  632515 retry.go:31] will retry after 353.940506ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:49:59.468351  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:49:59.468419  632515 retry.go:31] will retry after 790.243256ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:00.295054  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:00.295156  632515 retry.go:31] will retry after 230.050538ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:00.525535  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:00.546341  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:00.583328  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:00.583366  632515 retry.go:31] will retry after 350.741503ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:00.970853  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:00.970893  632515 retry.go:31] will retry after 300.695459ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:01.309524  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:01.309557  632515 retry.go:31] will retry after 595.595625ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:01.943226  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:01.943326  632515 provision.go:87] duration metric: took 3.564901302s to configureAuth
	W0917 00:50:01.943340  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:01.943370  632515 retry.go:31] will retry after 82.092µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:01.944551  632515 provision.go:84] configureAuth start
	I0917 00:50:01.944631  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:01.964075  632515 provision.go:143] copyHostCerts
	I0917 00:50:01.964128  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:01.964160  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:01.964174  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:01.964250  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:01.964378  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:01.964422  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:01.964429  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:01.964463  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:01.964551  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:01.964576  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:01.964584  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:01.964616  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:01.964708  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:02.030303  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:02.030365  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:02.030421  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:02.050138  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:02.086170  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:02.086227  632515 retry.go:31] will retry after 299.253149ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:02.422896  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:02.422925  632515 retry.go:31] will retry after 210.347632ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:02.671216  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:02.671255  632515 retry.go:31] will retry after 814.790488ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:03.521857  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:03.521954  632515 retry.go:31] will retry after 176.199116ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:03.698338  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:03.716938  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:03.753247  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:03.753288  632515 retry.go:31] will retry after 155.234551ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:03.945915  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:03.945949  632515 retry.go:31] will retry after 523.325975ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:04.505459  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:04.505496  632515 retry.go:31] will retry after 744.659161ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:05.286909  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:05.287029  632515 provision.go:87] duration metric: took 3.342456692s to configureAuth
	W0917 00:50:05.287040  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:05.287056  632515 retry.go:31] will retry after 174.81µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:05.288151  632515 provision.go:84] configureAuth start
	I0917 00:50:05.288248  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:05.307557  632515 provision.go:143] copyHostCerts
	I0917 00:50:05.307595  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:05.307622  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:05.307631  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:05.307690  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:05.307771  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:05.307789  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:05.307793  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:05.307813  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:05.307910  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:05.307938  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:05.307948  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:05.307977  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:05.308069  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:06.124049  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:06.124110  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:06.124147  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:06.142960  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:06.179541  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:06.179577  632515 retry.go:31] will retry after 253.641842ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:06.470694  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:06.470724  632515 retry.go:31] will retry after 361.06837ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:06.869140  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:06.869183  632515 retry.go:31] will retry after 748.337326ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:07.654341  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:07.654488  632515 retry.go:31] will retry after 302.218349ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:07.957049  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:07.975836  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:08.012335  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:08.012373  632515 retry.go:31] will retry after 343.545558ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:08.393469  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:08.393509  632515 retry.go:31] will retry after 292.709088ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:08.722910  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:08.722952  632515 retry.go:31] will retry after 782.245002ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:09.542622  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:09.542713  632515 provision.go:87] duration metric: took 4.254541048s to configureAuth
	W0917 00:50:09.542725  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:09.542740  632515 retry.go:31] will retry after 363.465µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
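[Annotation] This closes the first full cycle, and the shape repeats for the rest of the log: an inner retry loop (retry.go:31) re-dials SSH after jittered delays of a few hundred milliseconds, and when the whole configureAuth pass gives up, an outer retry restarts it almost immediately (the sub-millisecond "will retry after 363.465µs" line). A generic sketch of that retry shape — assumed names, not minikube's retry.go — is:

    package retrysketch

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // withRetry re-runs op with jittered, growing delays, mirroring the
    // "will retry after ..." lines above (generic sketch, not minikube code).
    func withRetry(attempts int, base time.Duration, op func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		// Jittered backoff: the delay grows with the attempt number.
    		d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

Since the underlying cause (a rejected key) never changes, every cycle below fails the same way until the overall budget is exhausted.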
	I0917 00:50:09.543896  632515 provision.go:84] configureAuth start
	I0917 00:50:09.543987  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:09.563254  632515 provision.go:143] copyHostCerts
	I0917 00:50:09.563298  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:09.563342  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:09.563350  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:09.563447  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:09.563550  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:09.563569  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:09.563574  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:09.563599  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:09.563658  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:09.563679  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:09.563682  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:09.563701  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
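[Annotation] The copyHostCerts block above refreshes the working copies of ca.pem, cert.pem, and key.pem under .minikube/ from .minikube/certs/ on every pass: each file is found, removed, then copied anew — exactly the found/rm/cp triplets in the exec_runner lines. The per-file operation, as an illustrative helper (not minikube's exec_runner):

    package copysketch

    import (
    	"io"
    	"os"
    )

    // refreshCopy mirrors the found/rm/cp sequence in the log: drop any stale
    // destination file, then copy src over it.
    func refreshCopy(src, dst string) error {
    	if _, err := os.Stat(dst); err == nil {
    		if err := os.Remove(dst); err != nil {
    			return err
    		}
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }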
	I0917 00:50:09.563770  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:10.100678  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:10.100740  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
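[Annotation] The tripled "/etc/docker /etc/docker /etc/docker" above is not log corruption: copyRemoteCerts appears to emit one mkdir argument per certificate it is about to push (ca.pem, server.pem, server-key.pem), and all three targets live in /etc/docker, so the same directory is listed three times; with mkdir -p this is harmless. A sketch of how such a command line could be assembled (assumed helper name, not minikube's code):

    package remotecerts

    import (
    	"path"
    	"strings"
    )

    // mkdirCmd builds the "sudo mkdir -p" line from the remote target paths,
    // one directory per cert; duplicates are left in, as seen in the log.
    func mkdirCmd(targets []string) string {
    	dirs := make([]string, 0, len(targets))
    	for _, t := range targets {
    		dirs = append(dirs, path.Dir(t))
    	}
    	return "sudo mkdir -p " + strings.Join(dirs, " ")
    }

For example, mkdirCmd([]string{"/etc/docker/ca.pem", "/etc/docker/server.pem", "/etc/docker/server-key.pem"}) yields the repeated form logged above.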
	I0917 00:50:10.100776  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:10.120637  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:10.159175  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:10.159210  632515 retry.go:31] will retry after 316.977532ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:10.512855  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:10.512910  632515 retry.go:31] will retry after 206.602874ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:10.757756  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:10.757791  632515 retry.go:31] will retry after 388.38065ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:11.183258  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:11.183293  632515 retry.go:31] will retry after 551.25599ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:11.772010  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:11.772120  632515 retry.go:31] will retry after 288.087276ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:12.060552  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:12.079987  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
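[Annotation] The cli_runner/sshutil pair above shows where the port comes from: docker container inspect, with the Go template shown, extracts the host port Docker published for the container's 22/tcp (33213 here, modulo the quoting cli_runner wraps around the template), and sshutil then dials that port on 127.0.0.1. The same lookup, reproduced as a small Go sketch (invented helper name, shelling out to the real docker CLI):

    package portsketch

    import (
    	"os/exec"
    	"strings"
    )

    // hostSSHPort asks Docker which host port is published for 22/tcp in the
    // named container, using the same template as the log line above.
    func hostSSHPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect",
    		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		container).Output()
    	return strings.TrimSpace(string(out)), err
    }

Usage here would be hostSSHPort("ha-671025-m04"), returning "33213" for this run.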
	W0917 00:50:12.117424  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:12.117463  632515 retry.go:31] will retry after 255.354599ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:12.409744  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:12.409776  632515 retry.go:31] will retry after 522.962893ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:12.970294  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:12.970350  632515 retry.go:31] will retry after 438.867721ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:13.446548  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:13.446669  632515 provision.go:87] duration metric: took 3.902748058s to configureAuth
	W0917 00:50:13.446683  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:13.446698  632515 retry.go:31] will retry after 468.526µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
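[Annotation] Note the ordering just above: "duration metric: took 3.9s to configureAuth" prints even though configureAuth then reports failure. That is the usual Go pattern of a deferred timer that fires on every exit path, success or not — so a "took ... to" line in these logs never by itself implies the step succeeded. Illustrative shape (assumption about the logging pattern, not a quote of minikube's source):

    package metricsketch

    import (
    	"log"
    	"time"
    )

    // configureAuth shows why a "took ... to configureAuth" line appears even on
    // failure: the deferred timer runs on every return path.
    func configureAuth(run func() error) error {
    	start := time.Now()
    	defer func() {
    		log.Printf("duration metric: took %s to configureAuth", time.Since(start))
    	}()
    	return run()
    }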
	I0917 00:50:13.447846  632515 provision.go:84] configureAuth start
	I0917 00:50:13.447950  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:13.467144  632515 provision.go:143] copyHostCerts
	I0917 00:50:13.467203  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:13.467237  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:13.467253  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:13.467326  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:13.467466  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:13.467488  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:13.467493  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:13.467517  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:13.467581  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:13.467598  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:13.467604  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:13.467624  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:13.467732  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:13.870974  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:13.871042  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:13.871085  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:13.889812  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:13.926496  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:13.926545  632515 retry.go:31] will retry after 267.505033ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:14.231498  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:14.231534  632515 retry.go:31] will retry after 522.902976ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:14.791171  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:14.791205  632515 retry.go:31] will retry after 739.615653ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:15.567533  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:15.567636  632515 retry.go:31] will retry after 232.900985ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:15.801150  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:15.819485  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:15.855915  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:15.855948  632515 retry.go:31] will retry after 279.418591ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:16.173138  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:16.173186  632515 retry.go:31] will retry after 265.737704ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:16.477676  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:16.477709  632515 retry.go:31] will retry after 702.578423ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:17.216952  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:17.217096  632515 provision.go:87] duration metric: took 3.769225472s to configureAuth
	W0917 00:50:17.217109  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:17.217124  632515 retry.go:31] will retry after 917.898µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:17.218282  632515 provision.go:84] configureAuth start
	I0917 00:50:17.218375  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:17.237626  632515 provision.go:143] copyHostCerts
	I0917 00:50:17.237669  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:17.237705  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:17.237716  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:17.237768  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:17.237859  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:17.237878  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:17.237882  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:17.237911  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:17.237968  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:17.237991  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:17.237996  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:17.238025  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:17.238106  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:17.295733  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:17.295811  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:17.295864  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:17.315495  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:17.351525  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:17.351562  632515 retry.go:31] will retry after 278.460935ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:17.666932  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:17.666969  632515 retry.go:31] will retry after 353.734866ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:18.057920  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:18.057958  632515 retry.go:31] will retry after 706.602278ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:18.802736  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:18.802814  632515 retry.go:31] will retry after 187.543888ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:18.991326  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:19.010215  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:19.046936  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:19.046968  632515 retry.go:31] will retry after 181.982762ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:19.265359  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:19.265415  632515 retry.go:31] will retry after 426.438339ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:19.728051  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:19.728089  632515 retry.go:31] will retry after 494.698101ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:20.260104  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:20.260143  632515 retry.go:31] will retry after 546.342664ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:20.843132  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:20.843234  632515 provision.go:87] duration metric: took 3.624926933s to configureAuth
	W0917 00:50:20.843248  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:20.843260  632515 retry.go:31] will retry after 614.342µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:20.844436  632515 provision.go:84] configureAuth start
	I0917 00:50:20.844517  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:20.863058  632515 provision.go:143] copyHostCerts
	I0917 00:50:20.863099  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:20.863129  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:20.863138  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:20.863192  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:20.863270  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:20.863287  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:20.863293  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:20.863326  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:20.863373  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:20.863408  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:20.863418  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:20.863443  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:20.863501  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:21.547579  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:21.547640  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:21.547689  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:21.567099  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:21.603139  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:21.603173  632515 retry.go:31] will retry after 354.905304ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:21.994839  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:21.994871  632515 retry.go:31] will retry after 230.336886ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:22.262896  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:22.262933  632515 retry.go:31] will retry after 470.238343ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:22.769438  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:22.769478  632515 retry.go:31] will retry after 775.977166ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:23.582257  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:23.582369  632515 provision.go:87] duration metric: took 2.737910901s to configureAuth
	W0917 00:50:23.582382  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:23.582428  632515 retry.go:31] will retry after 1.384293ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:23.584647  632515 provision.go:84] configureAuth start
	I0917 00:50:23.584721  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:23.604649  632515 provision.go:143] copyHostCerts
	I0917 00:50:23.604691  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:23.604726  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:23.604738  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:23.604803  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:23.604906  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:23.604928  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:23.604937  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:23.604972  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:23.605082  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:23.605108  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:23.605117  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:23.605186  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:23.605289  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:23.929770  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:23.929834  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:23.929882  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:23.950551  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:23.986773  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:23.986827  632515 retry.go:31] will retry after 191.045816ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:24.215077  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:24.215159  632515 retry.go:31] will retry after 367.654178ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:24.619976  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:24.620013  632515 retry.go:31] will retry after 667.754811ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:25.324805  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:25.324901  632515 retry.go:31] will retry after 226.841471ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:25.552443  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:25.572474  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:25.608798  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:25.608828  632515 retry.go:31] will retry after 261.920271ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:25.907792  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:25.907829  632515 retry.go:31] will retry after 224.736719ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:26.169079  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:26.169236  632515 retry.go:31] will retry after 469.609314ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:26.676774  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:26.676905  632515 provision.go:87] duration metric: took 3.092235264s to configureAuth
	W0917 00:50:26.676919  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:26.676935  632515 retry.go:31] will retry after 1.322684ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:26.679211  632515 provision.go:84] configureAuth start
	I0917 00:50:26.679326  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:26.699028  632515 provision.go:143] copyHostCerts
	I0917 00:50:26.699074  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:26.699113  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:26.699122  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:26.699179  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:26.699263  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:26.699281  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:26.699287  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:26.699322  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:26.699435  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:26.699458  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:26.699464  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:26.699486  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:26.699541  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:26.883507  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:26.883571  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:26.883610  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:26.901909  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:26.938113  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:26.938146  632515 retry.go:31] will retry after 134.491037ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:27.109871  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:27.109912  632515 retry.go:31] will retry after 526.197976ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:27.673521  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:27.673555  632515 retry.go:31] will retry after 585.726632ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:28.297059  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:28.297095  632515 retry.go:31] will retry after 528.356861ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:28.863599  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:28.863707  632515 provision.go:87] duration metric: took 2.184468569s to configureAuth
	W0917 00:50:28.863723  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:28.863738  632515 retry.go:31] will retry after 5.073321ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:28.868924  632515 provision.go:84] configureAuth start
	I0917 00:50:28.869023  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:28.887951  632515 provision.go:143] copyHostCerts
	I0917 00:50:28.887998  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:28.888029  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:28.888039  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:28.888105  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:28.888201  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:28.888223  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:28.888233  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:28.888267  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:28.888349  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:28.888374  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:28.888382  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:28.888425  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:28.888506  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:28.973999  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:28.974061  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:28.974105  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:28.993851  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:29.030823  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:29.030857  632515 retry.go:31] will retry after 289.215993ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:29.356949  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:29.356981  632515 retry.go:31] will retry after 495.318582ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:29.888829  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:29.888863  632515 retry.go:31] will retry after 628.473012ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:30.554178  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:30.554268  632515 retry.go:31] will retry after 195.67279ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:30.750597  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:30.768976  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:30.805780  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:30.805817  632515 retry.go:31] will retry after 162.662176ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:31.005739  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:31.005782  632515 retry.go:31] will retry after 501.550591ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:31.543556  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:31.543585  632515 retry.go:31] will retry after 654.512353ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:32.234876  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:32.234982  632515 provision.go:87] duration metric: took 3.366029278s to configureAuth
	W0917 00:50:32.234996  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:32.235011  632515 retry.go:31] will retry after 4.423458ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:32.240271  632515 provision.go:84] configureAuth start
	I0917 00:50:32.240382  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:32.260973  632515 provision.go:143] copyHostCerts
	I0917 00:50:32.261040  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:32.261072  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:32.261082  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:32.261135  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:32.261251  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:32.261275  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:32.261280  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:32.261305  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:32.261350  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:32.261373  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:32.261380  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:32.261427  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:32.261492  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:32.576811  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:32.576898  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:32.576946  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:32.594876  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:32.631272  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:32.631304  632515 retry.go:31] will retry after 159.534115ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:32.828830  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:32.828873  632515 retry.go:31] will retry after 525.910165ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:33.391768  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:33.391811  632515 retry.go:31] will retry after 487.290507ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:33.916025  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:33.916061  632515 retry.go:31] will retry after 426.666789ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:34.380994  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:34.381113  632515 provision.go:87] duration metric: took 2.140814482s to configureAuth
	W0917 00:50:34.381127  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:34.381151  632515 retry.go:31] will retry after 4.999439ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:34.386421  632515 provision.go:84] configureAuth start
	I0917 00:50:34.386521  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:34.405489  632515 provision.go:143] copyHostCerts
	I0917 00:50:34.405536  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:34.405566  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:34.405584  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:34.405640  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:34.405718  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:34.405736  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:34.405743  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:34.405762  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:34.405816  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:34.405834  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:34.405838  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:34.405858  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:34.405912  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:34.645184  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:34.645253  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:34.645292  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:34.664718  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:34.700962  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:34.701003  632515 retry.go:31] will retry after 219.116738ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:34.956072  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:34.956145  632515 retry.go:31] will retry after 526.047595ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:35.518345  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:35.518380  632515 retry.go:31] will retry after 696.668276ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:36.252208  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:36.252303  632515 retry.go:31] will retry after 330.708312ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:36.583965  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:36.602741  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:36.638646  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:36.638688  632515 retry.go:31] will retry after 278.757425ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:36.954355  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:36.954410  632515 retry.go:31] will retry after 226.711803ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:37.220262  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:37.220310  632515 retry.go:31] will retry after 749.165652ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:38.006557  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:38.006589  632515 retry.go:31] will retry after 482.349257ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:38.526080  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:38.526178  632515 provision.go:87] duration metric: took 4.139727646s to configureAuth
	W0917 00:50:38.526188  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:38.526212  632515 retry.go:31] will retry after 19.037245ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
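Every dial in this log fails the same way: the client offers the "none" and "publickey" methods, the server accepts neither, and golang.org/x/crypto/ssh reports "no supported methods remain". Below is a minimal sketch of the dial the sshutil.go:53 lines perform; the port is taken from the log, the key path is shortened, and the rest is an assumption about the setup, not minikube's exact code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/ha-671025-m04/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		// Only publickey is configured; if the server rejects this key,
		// there is no fallback, hence "no supported methods remain".
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only: local container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33213", cfg)
	if err != nil {
		fmt.Println("dial failed:", err) // e.g. the handshake error above
		return
	}
	defer client.Close()
	fmt.Println("connected")
}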
	I0917 00:50:38.545416  632515 provision.go:84] configureAuth start
	I0917 00:50:38.545541  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:38.566128  632515 provision.go:143] copyHostCerts
	I0917 00:50:38.566171  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:38.566202  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:38.566208  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:38.566271  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:38.566349  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:38.566368  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:38.566372  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:38.566416  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:38.566482  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:38.566502  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:38.566507  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:38.566526  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:38.566593  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:38.991903  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:38.991971  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:38.992013  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:39.011347  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:39.050038  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:39.050073  632515 retry.go:31] will retry after 337.988535ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:39.425023  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:39.425081  632515 retry.go:31] will retry after 500.505537ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:39.962290  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:39.962331  632515 retry.go:31] will retry after 503.789672ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:40.503420  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:40.503518  632515 retry.go:31] will retry after 333.367854ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:40.837065  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:40.856774  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:40.894359  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:40.894416  632515 retry.go:31] will retry after 222.689334ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:41.154246  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:41.154287  632515 retry.go:31] will retry after 282.589186ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:41.474233  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:41.474271  632515 retry.go:31] will retry after 651.602213ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:42.162200  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:42.162235  632515 retry.go:31] will retry after 552.404672ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:42.752279  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:42.752412  632515 provision.go:87] duration metric: took 4.206938108s to configureAuth
	W0917 00:50:42.752426  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:42.752443  632515 retry.go:31] will retry after 18.126258ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
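Between dial attempts, cli_runner re-reads the host port Docker mapped to the container's 22/tcp, using the Go template shown in the log. A sketch of the same query via os/exec follows; the container name is from the log, and the extra quoting cli_runner prints around the template is dropped.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template as the logged docker invocation: index into the
	// port map for "22/tcp" and take the first binding's HostPort.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "ha-671025-m04").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}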
	I0917 00:50:42.771710  632515 provision.go:84] configureAuth start
	I0917 00:50:42.771828  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:42.790293  632515 provision.go:143] copyHostCerts
	I0917 00:50:42.790346  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:42.790378  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:42.790398  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:42.790463  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:42.790563  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:42.790598  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:42.790608  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:42.790681  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:42.790749  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:42.790775  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:42.790787  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:42.790819  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:42.791233  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:42.868607  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:42.868675  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:42.868711  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:42.888168  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:42.925190  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:42.925226  632515 retry.go:31] will retry after 290.318239ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:43.251563  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:43.251597  632515 retry.go:31] will retry after 468.433406ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:43.756730  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:43.756769  632515 retry.go:31] will retry after 614.415077ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:44.408758  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:44.408845  632515 retry.go:31] will retry after 201.201149ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:44.610310  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:44.629682  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:44.666478  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:44.666513  632515 retry.go:31] will retry after 335.575333ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:45.039687  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:45.039722  632515 retry.go:31] will retry after 325.495793ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:45.402130  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:45.402167  632515 retry.go:31] will retry after 665.343507ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:46.105384  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:46.105501  632515 provision.go:87] duration metric: took 3.333748619s to configureAuth
	W0917 00:50:46.105514  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:46.105530  632515 retry.go:31] will retry after 26.362188ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
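Each configureAuth pass starts by re-syncing ca.pem, cert.pem and key.pem into the minikube home: if a copy already exists it is removed, then the source is copied fresh, matching the found/rm/cp triplets above. A sketch of that idempotent copy under shortened paths (the real ones live under the Jenkins minikube home shown in the log):

package main

import (
	"io"
	"log"
	"os"
	"path/filepath"
)

// copyHostCert mirrors the logged sequence: remove any stale destination
// ("found ..., removing ..." / "rm: ..."), then copy the source over.
func copyHostCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	for _, name := range []string{"ca.pem", "cert.pem", "key.pem"} {
		src := filepath.Join(".minikube", "certs", name)
		dst := filepath.Join(".minikube", name)
		if err := copyHostCert(src, dst); err != nil {
			log.Fatal(err)
		}
		log.Printf("cp: %s --> %s", src, dst)
	}
}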
	I0917 00:50:46.132797  632515 provision.go:84] configureAuth start
	I0917 00:50:46.132913  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:46.151606  632515 provision.go:143] copyHostCerts
	I0917 00:50:46.151650  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:46.151683  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:46.151693  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:46.151749  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:46.151834  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:46.151854  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:46.151859  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:46.151879  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:46.151925  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:46.151941  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:46.151947  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:46.151965  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:46.152015  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:46.678008  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:46.678077  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:46.678115  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:46.697254  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:46.733438  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:46.733466  632515 retry.go:31] will retry after 278.597162ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:47.050972  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:47.051022  632515 retry.go:31] will retry after 188.61489ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:47.276353  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:47.276422  632515 retry.go:31] will retry after 668.98273ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:47.984108  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:47.984145  632515 retry.go:31] will retry after 606.369731ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:48.628443  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:48.628556  632515 provision.go:87] duration metric: took 2.495723391s to configureAuth
	W0917 00:50:48.628570  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:48.628587  632515 retry.go:31] will retry after 64.390783ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
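The "Temporary Error: NewSession" lines show where copyRemoteCerts would go next if a session could be opened: run the logged mkdir, then stream the certs over. Below is a fragment (not a full program) sketching that step with golang.org/x/crypto/ssh; the function is hypothetical, and the triple "/etc/docker" is kept verbatim from the log, one entry per cert destination directory.

package sketch

import (
	"fmt"

	"golang.org/x/crypto/ssh"
)

// ensureRemoteDirs opens a session on an already-dialed client and runs
// the mkdir the ssh_runner:195 lines show. When auth is broken, a
// NewSession failure is what the wrapped errors above begin with.
func ensureRemoteDirs(client *ssh.Client) error {
	sess, err := client.NewSession()
	if err != nil {
		return fmt.Errorf("NewSession: %w", err)
	}
	defer sess.Close()
	if out, err := sess.CombinedOutput("sudo mkdir -p /etc/docker /etc/docker /etc/docker"); err != nil {
		return fmt.Errorf("mkdir failed: %v: %s", err, out)
	}
	return nil
}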
	I0917 00:50:48.693858  632515 provision.go:84] configureAuth start
	I0917 00:50:48.693987  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:48.713843  632515 provision.go:143] copyHostCerts
	I0917 00:50:48.713892  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:48.713929  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:48.713945  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:48.714004  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:48.714086  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:48.714107  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:48.714114  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:48.714135  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:48.714184  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:48.714201  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:48.714204  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:48.714222  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:48.714276  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:48.895697  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:48.895760  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:48.895811  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:48.914428  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:48.950712  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:48.950744  632515 retry.go:31] will retry after 178.741801ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:49.166254  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:49.166296  632515 retry.go:31] will retry after 501.407422ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:49.703996  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:49.704033  632515 retry.go:31] will retry after 817.867259ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:50.560617  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:50.560706  632515 retry.go:31] will retry after 312.243953ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:50.873217  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:50.891443  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:50.926995  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:50.927027  632515 retry.go:31] will retry after 156.916989ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:51.120257  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:51.120290  632515 retry.go:31] will retry after 438.534255ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:51.596576  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:51.596617  632515 retry.go:31] will retry after 414.358837ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:52.048272  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:52.048406  632515 provision.go:87] duration metric: took 3.354481141s to configureAuth
	W0917 00:50:52.048419  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:52.048435  632515 retry.go:31] will retry after 61.191343ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:52.110719  632515 provision.go:84] configureAuth start
	I0917 00:50:52.110826  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:52.128699  632515 provision.go:143] copyHostCerts
	I0917 00:50:52.128752  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:52.128784  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:52.128796  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:52.128877  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:52.128987  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:52.129058  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:52.129066  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:52.129093  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:52.129152  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:52.129170  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:52.129177  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:52.129196  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:52.129259  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:52.433622  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:52.433690  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:52.433739  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:52.453878  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:52.490084  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:52.490116  632515 retry.go:31] will retry after 172.629388ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:52.700293  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:52.700336  632515 retry.go:31] will retry after 263.193431ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:53.001711  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:53.001752  632515 retry.go:31] will retry after 292.388705ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:53.330899  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:53.330983  632515 retry.go:31] will retry after 150.876202ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:53.482528  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:53.503352  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:53.539271  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:53.539312  632515 retry.go:31] will retry after 204.255046ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:53.780000  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:53.780033  632515 retry.go:31] will retry after 286.53771ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:54.104096  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:54.104136  632515 retry.go:31] will retry after 342.853351ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:54.484140  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:54.484183  632515 retry.go:31] will retry after 538.071273ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:55.059995  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:55.060097  632515 provision.go:87] duration metric: took 2.949335089s to configureAuth
	W0917 00:50:55.060112  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:55.060141  632515 retry.go:31] will retry after 111.583987ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:55.172469  632515 provision.go:84] configureAuth start
	I0917 00:50:55.172579  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:55.192741  632515 provision.go:143] copyHostCerts
	I0917 00:50:55.192784  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:55.192813  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:55.192819  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:55.192888  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:55.192967  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:55.192985  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:55.192991  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:55.193019  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:55.193065  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:55.193081  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:55.193087  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:55.193108  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:55.193172  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:55.387230  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:55.387305  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:55.387354  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:55.406011  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:55.442542  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:55.442581  632515 retry.go:31] will retry after 197.893115ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:55.677233  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:55.677268  632515 retry.go:31] will retry after 361.184837ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:56.075532  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:56.075571  632515 retry.go:31] will retry after 820.045156ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:56.932557  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:56.932659  632515 retry.go:31] will retry after 314.2147ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:57.247168  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:57.265865  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:57.302600  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:57.302632  632515 retry.go:31] will retry after 269.882328ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:57.608658  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:57.608688  632515 retry.go:31] will retry after 352.472758ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:57.997996  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:57.998036  632515 retry.go:31] will retry after 611.661766ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:58.646119  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:58.646221  632515 provision.go:87] duration metric: took 3.473704273s to configureAuth
	W0917 00:50:58.646232  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:58.646247  632515 retry.go:31] will retry after 196.207718ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:58.842597  632515 provision.go:84] configureAuth start
	I0917 00:50:58.842696  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:50:58.861846  632515 provision.go:143] copyHostCerts
	I0917 00:50:58.861891  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:58.861926  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:50:58.861937  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:50:58.861993  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:50:58.862077  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:58.862105  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:50:58.862112  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:50:58.862133  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:50:58.862178  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:58.862195  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:50:58.862201  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:50:58.862222  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:50:58.862306  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:50:58.925355  632515 provision.go:177] copyRemoteCerts
	I0917 00:50:58.925427  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:50:58.925471  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:50:58.944441  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:50:58.981661  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:58.981696  632515 retry.go:31] will retry after 357.688867ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:59.376956  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:59.377010  632515 retry.go:31] will retry after 324.136592ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:50:59.737581  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:50:59.737618  632515 retry.go:31] will retry after 792.456915ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:00.568086  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:00.568182  632515 retry.go:31] will retry after 279.693773ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:00.848647  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:00.868780  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:00.904736  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:00.904769  632515 retry.go:31] will retry after 139.880253ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:01.081107  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:01.081149  632515 retry.go:31] will retry after 255.7145ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:01.374157  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:01.374191  632515 retry.go:31] will retry after 398.296513ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:01.808876  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:01.808911  632515 retry.go:31] will retry after 429.478006ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:02.276059  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:02.276173  632515 provision.go:87] duration metric: took 3.433544523s to configureAuth
	W0917 00:51:02.276185  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:02.276200  632515 retry.go:31] will retry after 269.773489ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:02.546669  632515 provision.go:84] configureAuth start
	I0917 00:51:02.546785  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:51:02.565819  632515 provision.go:143] copyHostCerts
	I0917 00:51:02.565857  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:02.565886  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:51:02.565895  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:02.565955  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:51:02.566034  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:02.566052  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:51:02.566059  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:02.566080  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:51:02.566147  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:02.566169  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:51:02.566176  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:02.566197  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:51:02.566287  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:51:02.707021  632515 provision.go:177] copyRemoteCerts
	I0917 00:51:02.707082  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:51:02.707122  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:02.725172  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:02.761827  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:02.761863  632515 retry.go:31] will retry after 155.983276ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:02.954178  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:02.954219  632515 retry.go:31] will retry after 308.036085ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:03.299259  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:03.299304  632515 retry.go:31] will retry after 573.078445ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:03.908424  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:03.908514  632515 retry.go:31] will retry after 231.719058ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:04.141101  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:04.159661  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:04.196173  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:04.196204  632515 retry.go:31] will retry after 265.004107ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:04.497255  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:04.497301  632515 retry.go:31] will retry after 207.19744ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:04.740144  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:04.740176  632515 retry.go:31] will retry after 616.853014ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:05.394683  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:05.394781  632515 provision.go:87] duration metric: took 2.848059764s to configureAuth
	W0917 00:51:05.394794  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:05.394809  632515 retry.go:31] will retry after 403.451834ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:05.798332  632515 provision.go:84] configureAuth start
	I0917 00:51:05.798469  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:51:05.816560  632515 provision.go:143] copyHostCerts
	I0917 00:51:05.816600  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:05.816629  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:51:05.816638  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:05.816690  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:51:05.816763  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:05.816781  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:51:05.816785  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:05.816805  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:51:05.816850  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:05.816869  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:51:05.816874  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:05.816893  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:51:05.816942  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:51:06.333877  632515 provision.go:177] copyRemoteCerts
	I0917 00:51:06.333939  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:51:06.333978  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:06.355479  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:06.392600  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:06.392641  632515 retry.go:31] will retry after 191.063243ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:06.620279  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:06.620312  632515 retry.go:31] will retry after 258.674944ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:06.916019  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:06.916052  632515 retry.go:31] will retry after 539.137674ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:07.490972  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:07.491012  632515 retry.go:31] will retry after 844.547743ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:08.372738  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:08.372835  632515 provision.go:87] duration metric: took 2.574473013s to configureAuth
	W0917 00:51:08.372848  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:08.372865  632515 retry.go:31] will retry after 260.808873ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:08.634342  632515 provision.go:84] configureAuth start
	I0917 00:51:08.634493  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:51:08.653239  632515 provision.go:143] copyHostCerts
	I0917 00:51:08.653276  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:08.653309  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:51:08.653322  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:08.653384  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:51:08.653565  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:08.653596  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:51:08.653606  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:08.653648  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:51:08.653717  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:08.653743  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:51:08.653752  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:08.653784  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:51:08.653857  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:51:08.730992  632515 provision.go:177] copyRemoteCerts
	I0917 00:51:08.731055  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:51:08.731111  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:08.749527  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:08.785121  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:08.785151  632515 retry.go:31] will retry after 364.542091ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:09.186219  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:09.186257  632515 retry.go:31] will retry after 547.354514ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:09.771218  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:09.771251  632515 retry.go:31] will retry after 393.114843ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:10.200019  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:10.200113  632515 retry.go:31] will retry after 322.022298ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:10.522644  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:10.542542  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:10.578305  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:10.578341  632515 retry.go:31] will retry after 156.765545ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:10.772114  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:10.772150  632515 retry.go:31] will retry after 440.395985ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:11.249690  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:11.249723  632515 retry.go:31] will retry after 316.056253ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:11.602837  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:11.602867  632515 retry.go:31] will retry after 793.877155ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:12.433964  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:12.434089  632515 provision.go:87] duration metric: took 3.799715145s to configureAuth
	W0917 00:51:12.434107  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:12.434128  632515 retry.go:31] will retry after 818.896799ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:13.253087  632515 provision.go:84] configureAuth start
	I0917 00:51:13.253220  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:51:13.271499  632515 provision.go:143] copyHostCerts
	I0917 00:51:13.271537  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:13.271572  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:51:13.271584  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:13.271654  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:51:13.271753  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:13.271781  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:51:13.271791  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:13.271825  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:51:13.271890  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:13.271917  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:51:13.271926  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:13.271954  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:51:13.272026  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:51:13.421488  632515 provision.go:177] copyRemoteCerts
	I0917 00:51:13.421560  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:51:13.421600  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:13.441833  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:13.479866  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:13.479906  632515 retry.go:31] will retry after 241.369213ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:13.758753  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:13.758780  632515 retry.go:31] will retry after 421.966909ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:14.217788  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:14.217822  632515 retry.go:31] will retry after 379.069996ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:14.635244  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:14.635284  632515 retry.go:31] will retry after 661.142982ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:15.332869  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:15.332968  632515 provision.go:87] duration metric: took 2.079842358s to configureAuth
	W0917 00:51:15.332981  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:15.332999  632515 retry.go:31] will retry after 1.513437961s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:16.846776  632515 provision.go:84] configureAuth start
	I0917 00:51:16.846873  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:51:16.865947  632515 provision.go:143] copyHostCerts
	I0917 00:51:16.865995  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:16.866029  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:51:16.866045  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:16.866110  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:51:16.866205  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:16.866230  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:51:16.866239  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:16.866274  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:51:16.866342  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:16.866366  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:51:16.866374  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:16.866417  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:51:16.866504  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:51:17.191667  632515 provision.go:177] copyRemoteCerts
	I0917 00:51:17.191732  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:51:17.191770  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:17.210373  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:17.246196  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:17.246231  632515 retry.go:31] will retry after 207.815954ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:17.490362  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:17.490422  632515 retry.go:31] will retry after 477.191676ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:18.004186  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:18.004226  632515 retry.go:31] will retry after 832.321168ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:18.874131  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:18.874224  632515 retry.go:31] will retry after 300.222685ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:19.174745  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:19.194057  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:19.230707  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:19.230745  632515 retry.go:31] will retry after 305.320497ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:19.572710  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:19.572746  632515 retry.go:31] will retry after 473.718736ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:20.084847  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:20.084885  632515 retry.go:31] will retry after 358.504495ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:20.481307  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:20.481448  632515 provision.go:87] duration metric: took 3.634641386s to configureAuth
	W0917 00:51:20.481467  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:20.481484  632515 retry.go:31] will retry after 1.55705326s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:22.038866  632515 provision.go:84] configureAuth start
	I0917 00:51:22.038992  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:51:22.057689  632515 provision.go:143] copyHostCerts
	I0917 00:51:22.057748  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:22.057786  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:51:22.057795  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:22.057874  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:51:22.057985  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:22.058015  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:51:22.058021  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:22.058061  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:51:22.058129  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:22.058155  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:51:22.058165  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:22.058194  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:51:22.058268  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:51:22.240974  632515 provision.go:177] copyRemoteCerts
	I0917 00:51:22.241048  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:51:22.241090  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:22.259723  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:22.295718  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:22.295755  632515 retry.go:31] will retry after 368.694319ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:22.701351  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:22.701413  632515 retry.go:31] will retry after 234.819858ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:22.973378  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:22.973421  632515 retry.go:31] will retry after 445.662455ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:23.457456  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:23.457559  632515 retry.go:31] will retry after 361.547297ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:23.820268  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:23.839565  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:23.877012  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:23.877051  632515 retry.go:31] will retry after 332.495425ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:24.247791  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:24.247832  632515 retry.go:31] will retry after 480.58286ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:24.766290  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:24.766325  632515 retry.go:31] will retry after 810.307801ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:25.613420  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:25.613526  632515 provision.go:87] duration metric: took 3.574631165s to configureAuth
	W0917 00:51:25.613536  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:25.613552  632515 retry.go:31] will retry after 3.493466893s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:29.108460  632515 provision.go:84] configureAuth start
	I0917 00:51:29.108592  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:51:29.127839  632515 provision.go:143] copyHostCerts
	I0917 00:51:29.127891  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:29.127920  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:51:29.127929  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:29.127982  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:51:29.128065  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:29.128084  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:51:29.128088  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:29.128123  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:51:29.128172  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:29.128189  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:51:29.128195  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:29.128216  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:51:29.128268  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:51:29.375095  632515 provision.go:177] copyRemoteCerts
	I0917 00:51:29.375157  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:51:29.375198  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:29.394447  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:29.430648  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:29.430684  632515 retry.go:31] will retry after 150.757141ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:29.619124  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:29.619165  632515 retry.go:31] will retry after 238.164326ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:29.895281  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:29.895319  632515 retry.go:31] will retry after 311.5784ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:30.243059  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:30.243097  632515 retry.go:31] will retry after 958.202731ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:31.238646  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:31.238758  632515 provision.go:87] duration metric: took 2.130250058s to configureAuth
	W0917 00:51:31.238771  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:31.238786  632515 retry.go:31] will retry after 2.209510519s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:33.449718  632515 provision.go:84] configureAuth start
	I0917 00:51:33.449826  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:51:33.468749  632515 provision.go:143] copyHostCerts
	I0917 00:51:33.468799  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:33.468836  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:51:33.468846  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:33.468918  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:51:33.469024  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:33.469052  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:51:33.469062  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:33.469096  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:51:33.469165  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:33.469190  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:51:33.469199  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:33.469229  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:51:33.469357  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:51:33.985472  632515 provision.go:177] copyRemoteCerts
	I0917 00:51:33.985536  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:51:33.985573  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:34.004712  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:34.041636  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:34.041667  632515 retry.go:31] will retry after 363.611811ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:34.443484  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:34.443524  632515 retry.go:31] will retry after 483.561818ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:34.962924  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:34.962962  632515 retry.go:31] will retry after 639.921331ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:35.642266  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:35.642363  632515 retry.go:31] will retry after 341.867901ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:35.985141  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:36.005149  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:36.042989  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:36.043054  632515 retry.go:31] will retry after 226.013631ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:36.306592  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:36.306631  632515 retry.go:31] will retry after 437.098541ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:36.780356  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:36.780417  632515 retry.go:31] will retry after 807.742041ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:37.625924  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:37.626016  632515 provision.go:87] duration metric: took 4.176272444s to configureAuth
	W0917 00:51:37.626032  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:37.626046  632515 retry.go:31] will retry after 5.783821425s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:43.410502  632515 provision.go:84] configureAuth start
	I0917 00:51:43.410627  632515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-671025-m04
	I0917 00:51:43.429575  632515 provision.go:143] copyHostCerts
	I0917 00:51:43.429625  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:43.429656  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 00:51:43.429668  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 00:51:43.429730  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 00:51:43.429808  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:43.429829  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 00:51:43.429836  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 00:51:43.429856  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 00:51:43.429899  632515 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:43.429915  632515 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 00:51:43.429921  632515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 00:51:43.429938  632515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 00:51:43.429988  632515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.ha-671025-m04 san=[127.0.0.1 192.168.49.5 ha-671025-m04 localhost minikube]
	I0917 00:51:43.676937  632515 provision.go:177] copyRemoteCerts
	I0917 00:51:43.677016  632515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:51:43.677067  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:43.695948  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:43.731552  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:43.731597  632515 retry.go:31] will retry after 371.063976ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:44.139453  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:44.139502  632515 retry.go:31] will retry after 537.52019ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:44.712824  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:44.712860  632515 retry.go:31] will retry after 641.219509ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:45.391773  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:45.391868  632515 provision.go:87] duration metric: took 1.981318846s to configureAuth
	W0917 00:51:45.391880  632515 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:45.391895  632515 ubuntu.go:202] Error configuring auth during provisioning Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:45.391904  632515 machine.go:96] duration metric: took 10m57.675312059s to provisionDockerMachine
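Every configureAuth cycle above fails the same way: the provisioner regenerates host certs and a server cert for ha-671025-m04, then cannot open an SSH session because the node rejects publickey authentication ("attempted methods [none publickey], no supported methods remain"), until provisionDockerMachine gives up after 10m57s. A minimal diagnostic sketch, assuming golang.org/x/crypto/ssh, that reproduces the failing dial outside the test harness; the host, port, user, and key path are copied from the sshutil.go:53 lines above, while the program itself is illustrative and not part of the run:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// values taken from the log above; the key path is the one sshutil reports
		keyPath := "/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa"
		key, err := os.ReadFile(keyPath)
		if err != nil {
			fmt.Println("read key:", err)
			return
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			fmt.Println("parse key:", err)
			return
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local diagnostic
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33213", cfg)
		if err != nil {
			// with the mismatch seen above this prints:
			// ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
			fmt.Println("dial:", err)
			return
		}
		defer client.Close()
		fmt.Println("handshake OK")
	}

If this dial fails while the node's authorized_keys for the docker user (typically /home/docker/.ssh/authorized_keys inside the container) holds a different public key, the id_rsa on the host no longer matches what the node was provisioned with, which would be consistent with every retry above failing identically.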
	I0917 00:51:45.391996  632515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:51:45.392045  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:45.410677  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:45.447453  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:45.447492  632515 retry.go:31] will retry after 219.806567ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:45.704966  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:45.704997  632515 retry.go:31] will retry after 253.108883ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:45.994383  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:45.994455  632515 retry.go:31] will retry after 303.312227ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:46.334082  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:46.334176  632515 retry.go:31] will retry after 198.442889ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:46.533637  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:46.552382  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:46.588617  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:46.588648  632515 retry.go:31] will retry after 246.644284ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:46.871879  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:46.871908  632515 retry.go:31] will retry after 253.158895ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:47.160355  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:47.160421  632515 retry.go:31] will retry after 673.328529ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:47.870783  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:47.870870  632515 start.go:268] error running df -h /var: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:47.870881  632515 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
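
Both disk probes are one-line shell pipelines: df -h /var | awk 'NR==2{print $5}' reads the Use% column of df's second output row, and the -BG variant below reads the available GiB. A hedged local sketch of the percentage probe (in the log it is executed over SSH; running it locally is the only simplification here):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// varUsedPercent returns the Use% column for /var, mirroring the
// awk 'NR==2{print $5}' filter in the log.
func varUsedPercent() (string, error) {
	out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	if err != nil {
		return "", fmt.Errorf("df -h /var: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	pct, err := varUsedPercent()
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("/var used:", pct)
}
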
	I0917 00:51:47.870941  632515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:51:47.870985  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:47.890542  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:47.926837  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:47.926888  632515 retry.go:31] will retry after 191.979643ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:48.155789  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:48.155822  632515 retry.go:31] will retry after 496.333376ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:48.688512  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:48.688545  632515 retry.go:31] will retry after 707.042596ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:49.431589  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:49.431677  632515 retry.go:31] will retry after 160.419001ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:49.592966  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:49.613595  632515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33213 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa Username:docker}
	W0917 00:51:49.649915  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:49.649955  632515 retry.go:31] will retry after 205.246327ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:49.891651  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:49.891686  632515 retry.go:31] will retry after 286.771592ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:50.215702  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:50.215742  632515 retry.go:31] will retry after 813.162049ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:51.065001  632515 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:51.065091  632515 start.go:283] error running df -BG /var: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:51.065109  632515 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:51.065120  632515 fix.go:56] duration metric: took 11m3.67745899s for fixHost
	I0917 00:51:51.065132  632515 start.go:83] releasing machines lock for "ha-671025-m04", held for 11m3.677487819s
	W0917 00:51:51.065151  632515 start.go:714] error starting host: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0917 00:51:51.065294  632515 out.go:285] ! StartHost failed, but will try again: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:51.065310  632515 start.go:729] Will try again in 5 seconds ...
	I0917 00:51:56.068712  632515 start.go:360] acquireMachinesLock for ha-671025-m04: {Name:mka8d143727db583191b041d9fdffdc34290d3fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:51:56.068825  632515 start.go:364] duration metric: took 72.54µs to acquireMachinesLock for "ha-671025-m04"
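
acquireMachinesLock serializes access to a machine by name, polling every Delay (500ms) until it succeeds or Timeout (10m0s) elapses; here the lock was free, so acquisition took only 72.54µs. A small sketch of that acquire-with-timeout shape; this is an illustration, not minikube's actual mutex package:

package main

import (
	"errors"
	"fmt"
	"time"
)

type namedLock struct{ ch chan struct{} }

func newNamedLock() *namedLock { return &namedLock{ch: make(chan struct{}, 1)} }

// acquire retries every delay until the lock is free or timeout elapses.
func (l *namedLock) acquire(delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		select {
		case l.ch <- struct{}{}: // lock obtained
			return nil
		default:
		}
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring lock")
		}
		time.Sleep(delay)
	}
}

func (l *namedLock) release() { <-l.ch }

func main() {
	l := newNamedLock()
	start := time.Now()
	if err := l.acquire(500*time.Millisecond, 10*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	defer l.release()
	fmt.Printf("took %v to acquire lock\n", time.Since(start))
}
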
	I0917 00:51:56.068857  632515 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:51:56.068866  632515 fix.go:54] fixHost starting: m04
	I0917 00:51:56.069146  632515 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:51:56.089434  632515 fix.go:112] recreateIfNeeded on ha-671025-m04: state=Running err=<nil>
	W0917 00:51:56.089467  632515 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:51:56.091315  632515 out.go:252] * Updating the running docker "ha-671025-m04" container ...
	I0917 00:51:56.091363  632515 machine.go:93] provisionDockerMachine start ...
	I0917 00:51:56.091481  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:51:56.111050  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:51:56.111338  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I0917 00:51:56.111353  632515 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:51:56.147286  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:51:59.186003  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:02.224065  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:05.261128  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:08.298507  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:11.336655  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:14.374172  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:17.411005  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:20.448133  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:23.484595  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:26.522064  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:29.561855  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:32.599017  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:35.637968  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:38.676013  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:41.715044  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:44.753147  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:47.789890  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:50.827732  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:53.865517  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:56.901256  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:52:59.937736  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:02.975072  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:06.012018  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:09.050985  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:12.087769  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:15.125608  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:18.163655  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:21.202155  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:24.242132  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:27.279947  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:30.316610  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:33.353948  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:36.392886  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:39.431538  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:42.470338  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:45.508895  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:48.546547  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:51.584487  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:54.622720  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:53:57.659585  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:00.696914  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:03.734601  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:06.771719  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:09.808339  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:12.845310  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:15.883169  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:18.921190  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:21.957649  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:24.995930  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:28.032738  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:31.069581  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:34.108291  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:37.146962  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:40.184957  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:43.225066  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:46.263427  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:49.299798  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:52.337483  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:55.373484  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:54:58.375202  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
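
Every dial above fails with "attempted methods [none publickey], no supported methods remain": the client offers its private key, the server rejects it, and no other auth method is configured, so the handshake cannot proceed. A minimal reproduction of the failing handshake using golang.org/x/crypto/ssh (the key path and address come from the log; the function name is illustrative):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// dialWithKey performs the same publickey handshake libmachine attempts:
// user "docker" against the forwarded port, authenticating with id_rsa.
func dialWithKey(addr, user, keyPath string) (*ssh.Client, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
	}
	// If the server's authorized_keys no longer matches this key, Dial
	// returns exactly the "no supported methods remain" error in the log.
	return ssh.Dial("tcp", addr, cfg)
}

func main() {
	c, err := dialWithKey("127.0.0.1:33213", "docker",
		"/home/jenkins/minikube-integration/21550-517646/.minikube/machines/ha-671025-m04/id_rsa")
	if err != nil {
		fmt.Println("handshake failed:", err)
		return
	}
	c.Close()
}
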
	I0917 00:54:58.375243  632515 ubuntu.go:182] provisioning hostname "ha-671025-m04"
	I0917 00:54:58.375323  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:54:58.394506  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:54:58.394819  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I0917 00:54:58.394837  632515 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671025-m04 && echo "ha-671025-m04" | sudo tee /etc/hostname
	I0917 00:54:58.431690  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:01.471166  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:04.510103  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:07.546274  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:10.582544  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:13.619501  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:16.657477  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:19.695282  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:22.731579  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:25.768876  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:28.806301  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:31.842634  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:34.880236  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:37.918250  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:40.956882  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:43.993751  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:47.031600  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:50.069536  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:53.108071  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:56.146453  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:55:59.184185  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:02.221185  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:05.258874  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:08.296468  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:11.334381  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:14.373700  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:17.410753  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:20.448244  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:23.487061  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:26.525922  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:29.564962  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:32.601712  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:35.638347  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:38.677091  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:41.715243  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:44.753492  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:47.790755  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:50.827016  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:53.864846  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:56.901158  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:56:59.937763  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:02.975137  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:06.013236  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:09.050745  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:12.087672  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:15.126672  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:18.162247  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:21.199672  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:24.236364  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:27.272510  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:30.308139  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:33.345903  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:36.384679  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:39.422001  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:42.457940  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:45.493949  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:48.530953  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:51.568902  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:54.606598  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:57:57.643384  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0917 00:58:00.644556  632515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:58:00.644656  632515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-671025-m04
	I0917 00:58:00.664645  632515 main.go:141] libmachine: Using SSH client type: native
	I0917 00:58:00.664896  632515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33213 <nil> <nil>}
	I0917 00:58:00.664913  632515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671025-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671025-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671025-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:58:00.701043  632515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	
	
	==> CRI-O <==
	Sep 17 00:40:43 ha-671025 crio[562]: time="2025-09-17 00:40:43.044973311Z" level=info msg="Starting container: 673c879adf02feee4e3cff70bd481435d95b45bebb108eb4549ff5699e1061ee" id=45c71342-89c2-4355-a5cd-5ebc27e4b57d name=/runtime.v1.RuntimeService/StartContainer
	Sep 17 00:40:43 ha-671025 crio[562]: time="2025-09-17 00:40:43.054759211Z" level=info msg="Started container" PID=1339 containerID=673c879adf02feee4e3cff70bd481435d95b45bebb108eb4549ff5699e1061ee description=kube-system/coredns-66bc5c9577-mqh24/coredns id=45c71342-89c2-4355-a5cd-5ebc27e4b57d name=/runtime.v1.RuntimeService/StartContainer sandboxID=e01a9d1334f17b8ebf76f2744f9bf0e5de06dc99c8b5d90967566b656c2150ce
	Sep 17 00:41:13 ha-671025 crio[562]: time="2025-09-17 00:41:13.503530585Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a6c5975e-0415-48f2-8a4e-ddfce50148e6 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:41:13 ha-671025 crio[562]: time="2025-09-17 00:41:13.503791897Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=a6c5975e-0415-48f2-8a4e-ddfce50148e6 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:41:13 ha-671025 crio[562]: time="2025-09-17 00:41:13.504516489Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7bebac3a-6d79-428a-8af1-0fefd8ae794f name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:41:13 ha-671025 crio[562]: time="2025-09-17 00:41:13.504754743Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=7bebac3a-6d79-428a-8af1-0fefd8ae794f name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:41:13 ha-671025 crio[562]: time="2025-09-17 00:41:13.505639038Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a872c1b2-0d22-461a-b173-2eee81a476ee name=/runtime.v1.RuntimeService/CreateContainer
	Sep 17 00:41:13 ha-671025 crio[562]: time="2025-09-17 00:41:13.505757902Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 17 00:41:13 ha-671025 crio[562]: time="2025-09-17 00:41:13.518338070Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4bf58ca1189d69c45ca48ddf4a36adbce0ec24d887d5985eae405b89a32a5a9a/merged/etc/passwd: no such file or directory"
	Sep 17 00:41:13 ha-671025 crio[562]: time="2025-09-17 00:41:13.518377632Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4bf58ca1189d69c45ca48ddf4a36adbce0ec24d887d5985eae405b89a32a5a9a/merged/etc/group: no such file or directory"
	Sep 17 00:41:13 ha-671025 crio[562]: time="2025-09-17 00:41:13.573945199Z" level=info msg="Created container 7bd6786c999c7fd0843826f9a611f17a85e25a0577957448ee4aad56de3c87c1: kube-system/storage-provisioner/storage-provisioner" id=a872c1b2-0d22-461a-b173-2eee81a476ee name=/runtime.v1.RuntimeService/CreateContainer
	Sep 17 00:41:13 ha-671025 crio[562]: time="2025-09-17 00:41:13.574652188Z" level=info msg="Starting container: 7bd6786c999c7fd0843826f9a611f17a85e25a0577957448ee4aad56de3c87c1" id=eae6b5a3-9a06-4718-89dd-56856d229ffe name=/runtime.v1.RuntimeService/StartContainer
	Sep 17 00:41:13 ha-671025 crio[562]: time="2025-09-17 00:41:13.582124590Z" level=info msg="Started container" PID=1685 containerID=7bd6786c999c7fd0843826f9a611f17a85e25a0577957448ee4aad56de3c87c1 description=kube-system/storage-provisioner/storage-provisioner id=eae6b5a3-9a06-4718-89dd-56856d229ffe name=/runtime.v1.RuntimeService/StartContainer sandboxID=73bdbd1cf265ff065cd9a716a7233c1d060b4662e8c81949384a5034464817ad
	Sep 17 00:41:23 ha-671025 crio[562]: time="2025-09-17 00:41:23.526890982Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 17 00:41:23 ha-671025 crio[562]: time="2025-09-17 00:41:23.531863455Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 00:41:23 ha-671025 crio[562]: time="2025-09-17 00:41:23.531907280Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 00:41:23 ha-671025 crio[562]: time="2025-09-17 00:41:23.531932073Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 17 00:41:23 ha-671025 crio[562]: time="2025-09-17 00:41:23.536526011Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 00:41:23 ha-671025 crio[562]: time="2025-09-17 00:41:23.536558884Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 00:41:23 ha-671025 crio[562]: time="2025-09-17 00:41:23.536586361Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 17 00:41:23 ha-671025 crio[562]: time="2025-09-17 00:41:23.540835065Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 00:41:23 ha-671025 crio[562]: time="2025-09-17 00:41:23.540871739Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 00:41:23 ha-671025 crio[562]: time="2025-09-17 00:41:23.540890686Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 17 00:41:23 ha-671025 crio[562]: time="2025-09-17 00:41:23.544986835Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 00:41:23 ha-671025 crio[562]: time="2025-09-17 00:41:23.545021195Z" level=info msg="Updated default CNI network name to kindnet"
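
The CNI monitoring events above come from an inotify-style watch on /etc/cni/net.d: CRI-O sees the CREATE/WRITE/RENAME sequence of kindnet writing its conflist through a temp file, and re-resolves the default network after each event. A sketch of that watch loop using github.com/fsnotify/fsnotify as an assumed stand-in for CRI-O's internal watcher:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	// React to the same event kinds the CRI-O log shows for
	// 10-kindnet.conflist.temp and 10-kindnet.conflist.
	for ev := range w.Events {
		if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
			log.Printf("CNI monitoring event %q: %s", ev.Name, ev.Op)
			// a real runtime would re-parse the conflists here and
			// update the default CNI network name
		}
	}
}
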
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7bd6786c999c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       5                   73bdbd1cf265f       storage-provisioner
	673c879adf02f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   17 minutes ago      Running             coredns                   2                   e01a9d1334f17       coredns-66bc5c9577-mqh24
	33c4f16c5167b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   17 minutes ago      Running             kindnet-cni               2                   5bc2e693bc762       kindnet-9zvhz
	c0e87deb15c3a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   17 minutes ago      Running             coredns                   2                   0c8fa9de1efd6       coredns-66bc5c9577-vfj56
	cbd7c5ea0096f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   17 minutes ago      Running             busybox                   2                   db5b5c3e41bc3       busybox-7b57f96db7-wj4r5
	df3682e8ebaab       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 minutes ago      Exited              storage-provisioner       4                   73bdbd1cf265f       storage-provisioner
	309dd4524fd6c       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   17 minutes ago      Running             kube-proxy                2                   23782dcc6ef7a       kube-proxy-f58dt
	881fdaefda118       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   17 minutes ago      Running             kube-apiserver            2                   663c2fdb6a782       kube-apiserver-ha-671025
	939904409ad77       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   17 minutes ago      Running             kube-controller-manager   2                   cc5007dc0bc11       kube-controller-manager-ha-671025
	b2732d3309fd1       765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23   17 minutes ago      Running             kube-vip                  2                   f79cd4d6fce11       kube-vip-ha-671025
	5e41c9a2f042d       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   17 minutes ago      Running             kube-scheduler            2                   3c6cfaaaada7c       kube-scheduler-ha-671025
	ef9fd7a5f0657       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   17 minutes ago      Running             etcd                      2                   adb3a22e9933c       etcd-ha-671025
	
	
	==> coredns [673c879adf02feee4e3cff70bd481435d95b45bebb108eb4549ff5699e1061ee] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60299 - 61338 "HINFO IN 3841739346528860420.8653347582819063829. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01901975s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
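
The coredns kubernetes plugin is blocked on its list calls to the in-cluster apiserver VIP, 10.96.0.1:443; each dial times out, so the plugin keeps serving with an unsynced API view (the WARNING above). A quick connectivity probe for that exact symptom (the address is taken from the log; everything else is illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same destination the failing client-go reflectors dial.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		fmt.Println("apiserver VIP unreachable:", err) // matches "i/o timeout"
		return
	}
	conn.Close()
	fmt.Println("apiserver VIP reachable")
}
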
	
	
	==> coredns [c0e87deb15c3a772c18b048fac959200d37dd908a79057233b5d497622b9985b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38194 - 54812 "HINFO IN 4802718557526827409.779242311870971632. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.022606454s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-671025
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T00_28_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:28:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:57:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:56:30 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:56:30 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:56:30 +0000   Wed, 17 Sep 2025 00:28:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:56:30 +0000   Wed, 17 Sep 2025 00:28:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-671025
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd6117be4c0a4284be8c970e21652e4a
	  System UUID:                3f139a28-0338-43b0-8ed0-9128b9dcda65
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-wj4r5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-66bc5c9577-mqh24             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29m
	  kube-system                 coredns-66bc5c9577-vfj56             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29m
	  kube-system                 etcd-ha-671025                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29m
	  kube-system                 kindnet-9zvhz                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29m
	  kube-system                 kube-apiserver-ha-671025             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-ha-671025    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-f58dt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-ha-671025             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-vip-ha-671025                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 24m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x8 over 29m)  kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     29m                kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           29m                node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  NodeReady                29m                kubelet          Node ha-671025 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           28m                node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           26m                node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  NodeHasNoDiskPressure    25m (x8 over 25m)  kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 25m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25m (x8 over 25m)  kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     25m (x8 over 25m)  kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24m                node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           24m                node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           24m                node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node ha-671025 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node ha-671025 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x8 over 17m)  kubelet          Node ha-671025 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m                node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-671025 event: Registered Node ha-671025 in Controller
	
	
	Name:               ha-671025-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671025-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=ha-671025
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_17T00_29_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:29:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671025-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:57:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:57:54 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:57:54 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:57:54 +0000   Wed, 17 Sep 2025 00:29:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:57:54 +0000   Wed, 17 Sep 2025 00:29:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-671025-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 8ae6e99f16b84c7382cbfe66aeb55665
	  System UUID:                7d7ccba3-1786-4f88-a69c-4a852e967ea0
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-zw5tc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 etcd-ha-671025-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29m
	  kube-system                 kindnet-7scsq                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29m
	  kube-system                 kube-apiserver-ha-671025-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-ha-671025-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-4k8lz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-ha-671025-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-vip-ha-671025-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24m                kube-proxy       
	  Normal  Starting                 28m                kube-proxy       
	  Normal  RegisteredNode           28m                node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode           28m                node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode           28m                node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  Starting                 26m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     26m (x8 over 26m)  kubelet          Node ha-671025-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    26m (x8 over 26m)  kubelet          Node ha-671025-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  26m (x8 over 26m)  kubelet          Node ha-671025-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           26m                node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  NodeHasSufficientMemory  25m (x8 over 25m)  kubelet          Node ha-671025-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 25m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    25m (x8 over 25m)  kubelet          Node ha-671025-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25m (x8 over 25m)  kubelet          Node ha-671025-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24m                node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode           24m                node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode           24m                node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node ha-671025-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node ha-671025-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x8 over 17m)  kubelet          Node ha-671025-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m                node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-671025-m02 event: Registered Node ha-671025-m02 in Controller
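	
	The two node tables above are ordinary kubectl describe node output captured by the log collector. Assuming the same kubeconfig context, the Ready condition alone can be pulled without the full dump:
	
	  kubectl --context ha-671025 get node ha-671025-m02 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'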
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
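	
	The martian-destination lines are the kernel flagging packets addressed to 127.0.0.11 (Docker's embedded DNS inside each container network) as they arrive on veth devices; in this nested docker-in-docker setup they are noisy but expected. Whether they get logged at all is controlled by the log_martians sysctls, which can be checked on the host:
	
	  sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.default.log_martians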
	
	
	==> etcd [ef9fd7a5f065787410db9cbe176f6f1e916deaae443ad0a27ff662f26b49d595] <==
	{"level":"warn","ts":"2025-09-17T00:40:41.205519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:40:41.212315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:40:41.219471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:40:41.226429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:40:41.234292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:40:41.241582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:40:41.248593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:40:41.256821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:40:41.264267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:40:41.272220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:40:41.279067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:40:41.287295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:40:41.295419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:40:41.304094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:40:41.309869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:40:41.323332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:40:41.337559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:40:41.343857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:40:41.387092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60552","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:50:40.833358Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":3568}
	{"level":"info","ts":"2025-09-17T00:50:40.886407Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":3568,"took":"52.532505ms","hash":1249687039,"current-db-size-bytes":7200768,"current-db-size":"7.2 MB","current-db-size-in-use-bytes":2457600,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2025-09-17T00:50:40.886459Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1249687039,"revision":3568,"compact-revision":-1}
	{"level":"info","ts":"2025-09-17T00:55:40.840415Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":4196}
	{"level":"info","ts":"2025-09-17T00:55:40.857158Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":4196,"took":"16.291453ms","hash":3529687295,"current-db-size-bytes":7200768,"current-db-size":"7.2 MB","current-db-size-in-use-bytes":2146304,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-09-17T00:55:40.857224Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3529687295,"revision":4196,"compact-revision":3568}
	
	
	==> kernel <==
	 00:58:03 up  3:40,  0 users,  load average: 0.29, 0.28, 1.20
	Linux ha-671025 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [33c4f16c5167b29b049f93933d514dc017b522ea0e76cc43f5a8fe3eba84a902] <==
	I0917 00:57:03.527053       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:57:13.530485       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:57:13.530530       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:57:13.530766       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:57:13.530781       1 main.go:301] handling current node
	I0917 00:57:23.533490       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:57:23.533533       1 main.go:301] handling current node
	I0917 00:57:23.533557       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:57:23.533562       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:57:33.535483       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:57:33.535514       1 main.go:301] handling current node
	I0917 00:57:33.535530       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:57:33.535535       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:57:43.530240       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:57:43.530281       1 main.go:301] handling current node
	I0917 00:57:43.530306       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:57:43.530313       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:57:53.530114       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:57:53.530165       1 main.go:301] handling current node
	I0917 00:57:53.530184       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:57:53.530189       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:58:03.526854       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0917 00:58:03.526886       1 main.go:324] Node ha-671025-m02 has CIDR [10.244.1.0/24] 
	I0917 00:58:03.527047       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:58:03.527056       1 main.go:301] handling current node
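	
	kindnet is reconciling exactly two nodes (192.168.49.2 and 192.168.49.3), consistent with ha-671025-m03 having been deleted earlier in the run. The CIDRs it reports should match each node's PodCIDR field, which can be cross-checked with:
	
	  kubectl --context ha-671025 get nodes \
	    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'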
	
	
	==> kube-apiserver [881fdaefda118a66842bac8f4a5c129c196dccc90decb4c7ba8148ae8ae4202b] <==
	I0917 00:43:30.817010       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:44:09.049158       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:44:34.076174       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:45:35.486099       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:45:37.580126       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:46:56.176458       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:46:56.887818       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:47:56.924921       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:48:02.575568       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:49:10.120328       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:49:20.739421       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:50:26.950430       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:50:41.929368       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 00:50:43.197360       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:51:28.742197       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:52:07.987513       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:52:39.826807       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:53:19.942584       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:54:05.873801       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:54:23.601274       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:55:25.720068       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:55:35.642099       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:56:53.198106       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:56:54.018211       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:57:56.031860       1 stats.go:136] "Error getting keys" err="empty key: \"\""
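	
	The recurring "Error getting keys" entries carry an info-level prefix and repeat across the whole window without escalating, so they do not by themselves indicate an unhealthy API server. The full log can be pulled from the static pod, whose name follows the <component>-<node> pattern seen elsewhere in this dump:
	
	  kubectl --context ha-671025 -n kube-system logs kube-apiserver-ha-671025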
	
	
	==> kube-controller-manager [939904409ad77a2fc09eadbf445fe900ce24ccc4275bf93dfc1aed5e7a941726] <==
	I0917 00:40:45.431908       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	E0917 00:41:05.381699       1 gc_controller.go:151] "Failed to get node" err="node \"ha-671025-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671025-m03"
	E0917 00:41:05.381731       1 gc_controller.go:151] "Failed to get node" err="node \"ha-671025-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671025-m03"
	E0917 00:41:05.381737       1 gc_controller.go:151] "Failed to get node" err="node \"ha-671025-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671025-m03"
	E0917 00:41:05.381741       1 gc_controller.go:151] "Failed to get node" err="node \"ha-671025-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671025-m03"
	E0917 00:41:05.381746       1 gc_controller.go:151] "Failed to get node" err="node \"ha-671025-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671025-m03"
	E0917 00:41:25.382164       1 gc_controller.go:151] "Failed to get node" err="node \"ha-671025-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671025-m03"
	E0917 00:41:25.382211       1 gc_controller.go:151] "Failed to get node" err="node \"ha-671025-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671025-m03"
	E0917 00:41:25.382220       1 gc_controller.go:151] "Failed to get node" err="node \"ha-671025-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671025-m03"
	E0917 00:41:25.382227       1 gc_controller.go:151] "Failed to get node" err="node \"ha-671025-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671025-m03"
	E0917 00:41:25.382236       1 gc_controller.go:151] "Failed to get node" err="node \"ha-671025-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671025-m03"
	I0917 00:41:25.393285       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-9w6f7"
	I0917 00:41:25.415172       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-9w6f7"
	I0917 00:41:25.415222       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-671025-m03"
	I0917 00:41:25.437683       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-671025-m03"
	I0917 00:41:25.437802       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-q96zd"
	I0917 00:41:25.458216       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-q96zd"
	I0917 00:41:25.458259       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-671025-m03"
	I0917 00:41:25.481835       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-671025-m03"
	I0917 00:41:25.481879       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-671025-m03"
	I0917 00:41:25.500305       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-671025-m03"
	I0917 00:41:25.500454       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-671025-m03"
	I0917 00:41:25.519939       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-671025-m03"
	I0917 00:41:25.519977       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-671025-m03"
	I0917 00:41:25.540803       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-671025-m03"
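	
	The pod-garbage-collector entries record the controller force-deleting every pod orphaned on the removed ha-671025-m03 node. Had any lingered, they could be listed by node name with a field selector:
	
	  kubectl --context ha-671025 get pods -A -o wide \
	    --field-selector spec.nodeName=ha-671025-m03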
	
	
	==> kube-proxy [309dd4524fd6c33f874c33a5e89fb357efd025d3bdcff18ec486c23ba475aba9] <==
	I0917 00:40:43.101087       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:40:43.172044       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0917 00:40:46.233773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-671025&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0917 00:40:47.472673       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:40:47.472720       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:40:47.472826       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:40:47.493888       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:40:47.493962       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:40:47.501322       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:40:47.502025       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:40:47.502074       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:40:47.504817       1 config.go:200] "Starting service config controller"
	I0917 00:40:47.504840       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:40:47.504885       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:40:47.506152       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:40:47.507738       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:40:47.504890       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:40:47.508662       1 config.go:309] "Starting node config controller"
	I0917 00:40:47.508685       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:40:47.605039       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:40:47.608353       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:40:47.608376       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:40:47.608819       1 shared_informer.go:356] "Caches are synced" controller="node config"
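	
	kube-proxy recovered after a single failed Node watch against the VIP (192.168.49.254:8443 was unreachable mid-restart) and settled into iptables mode with all caches synced. The active mode is exposed on its metrics port, so it can be confirmed from the node:
	
	  minikube -p ha-671025 ssh -- curl -s http://localhost:10249/proxyMode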
	
	
	==> kube-scheduler [5e41c9a2f042d57188a38266da0078263acc2fb7aab88eaebc87ad8a5d8cfe08] <==
	I0917 00:40:39.846479       1 serving.go:386] Generated self-signed cert in-memory
	W0917 00:40:41.916101       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 00:40:41.916193       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 00:40:41.916208       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 00:40:41.916222       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 00:40:41.948987       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:40:41.949030       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:40:41.952651       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:40:41.952702       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:40:41.952934       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:40:41.953050       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:40:42.053724       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
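	
	The scheduler started without permission to read the extension-apiserver-authentication configmap and continued in the degraded mode the warnings describe. The log's own suggested fix, adapted for the system:kube-scheduler user rather than a service account (the binding name below is illustrative), would be:
	
	  kubectl -n kube-system create rolebinding kube-scheduler-authentication-reader \
	    --role=extension-apiserver-authentication-reader \
	    --user=system:kube-scheduler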
	
	
	==> kubelet <==
	Sep 17 00:55:58 ha-671025 kubelet[719]: E0917 00:55:58.525637     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758070558525316944  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:56:08 ha-671025 kubelet[719]: E0917 00:56:08.526879     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758070568526631959  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:56:08 ha-671025 kubelet[719]: E0917 00:56:08.526912     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758070568526631959  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:56:18 ha-671025 kubelet[719]: E0917 00:56:18.528281     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758070578528027533  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:56:18 ha-671025 kubelet[719]: E0917 00:56:18.528331     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758070578528027533  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:56:28 ha-671025 kubelet[719]: E0917 00:56:28.529968     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758070588529688406  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:56:28 ha-671025 kubelet[719]: E0917 00:56:28.530044     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758070588529688406  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:56:38 ha-671025 kubelet[719]: E0917 00:56:38.531431     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758070598531150657  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:56:38 ha-671025 kubelet[719]: E0917 00:56:38.531482     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758070598531150657  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:56:48 ha-671025 kubelet[719]: E0917 00:56:48.532774     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758070608532520316  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:56:48 ha-671025 kubelet[719]: E0917 00:56:48.532819     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758070608532520316  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:56:58 ha-671025 kubelet[719]: E0917 00:56:58.534082     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758070618533831311  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:56:58 ha-671025 kubelet[719]: E0917 00:56:58.534115     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758070618533831311  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:57:08 ha-671025 kubelet[719]: E0917 00:57:08.535655     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758070628535369894  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:57:08 ha-671025 kubelet[719]: E0917 00:57:08.535698     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758070628535369894  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:57:18 ha-671025 kubelet[719]: E0917 00:57:18.536766     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758070638536556908  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:57:18 ha-671025 kubelet[719]: E0917 00:57:18.536801     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758070638536556908  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:57:28 ha-671025 kubelet[719]: E0917 00:57:28.537998     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758070648537725279  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:57:28 ha-671025 kubelet[719]: E0917 00:57:28.538032     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758070648537725279  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:57:38 ha-671025 kubelet[719]: E0917 00:57:38.539217     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758070658538986484  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:57:38 ha-671025 kubelet[719]: E0917 00:57:38.539260     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758070658538986484  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:57:48 ha-671025 kubelet[719]: E0917 00:57:48.540547     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758070668540293617  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:57:48 ha-671025 kubelet[719]: E0917 00:57:48.540583     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758070668540293617  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:57:58 ha-671025 kubelet[719]: E0917 00:57:58.541967     719 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758070678541712311  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 17 00:57:58 ha-671025 kubelet[719]: E0917 00:57:58.542002     719 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758070678541712311  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
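	
	Every eviction-manager failure above is the same condition repeating on the kubelet's ten-second stats loop: the ImageFsInfo response from CRI-O lacks the fields the kubelet needs to decide whether a dedicated image filesystem exists, so eviction synchronization is skipped each cycle. What the runtime actually reports can be inspected on the node (crictl ships in the minikube image):
	
	  minikube -p ha-671025 ssh -- sudo crictl imagefsinfo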
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-671025 -n ha-671025
helpers_test.go:269: (dbg) Run:  kubectl --context ha-671025 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-vmzxx
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/RestartCluster]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-671025 describe pod busybox-7b57f96db7-vmzxx
helpers_test.go:290: (dbg) kubectl --context ha-671025 describe pod busybox-7b57f96db7-vmzxx:

-- stdout --
	Name:             busybox-7b57f96db7-vmzxx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gsm85 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-gsm85:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  18m                  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  18m                  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  17m                  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  7m22s (x2 over 12m)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  18m (x2 over 18m)    default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  18m (x2 over 18m)    default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  2m21s (x4 over 17m)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
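	
	All seven FailedScheduling events reduce to the same constraint: the busybox deployment uses pod anti-affinity, and with only two schedulable nodes, both already hosting a replica, the pending pod has nowhere to land. The rule can be dumped straight from the deployment:
	
	  kubectl --context ha-671025 get deploy busybox \
	    -o jsonpath='{.spec.template.spec.affinity}'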

-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (1053.16s)

TestKubernetesUpgrade (446.9s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-790254 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-790254 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.0424354s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-790254
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-790254: (12.348509484s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-790254 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-790254 status --format={{.Host}}: exit status 7 (70.998873ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-790254 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-790254 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.484775737s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-790254 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-790254 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-790254 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (92.720152ms)

-- stdout --
	* [kubernetes-upgrade-790254] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-790254
	    minikube start -p kubernetes-upgrade-790254 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7902542 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-790254 --kubernetes-version=v1.34.0
	    

** /stderr **
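
The downgrade refusal is by design: minikube will move an existing cluster's Kubernetes version forward in place but will not roll it back. Stripped of the test harness, the sequence exercised so far is equivalent to:

    minikube start -p kubernetes-upgrade-790254 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    minikube stop -p kubernetes-upgrade-790254
    minikube start -p kubernetes-upgrade-790254 --memory=3072 --kubernetes-version=v1.34.0 --driver=docker --container-runtime=crio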
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-790254 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-790254 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2m16.531858418s)

-- stdout --
	* [kubernetes-upgrade-790254] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-790254" primary control-plane node in "kubernetes-upgrade-790254" cluster
	* Pulling base image v0.0.48 ...
	
	

-- /stdout --
** stderr ** 
	I0917 01:19:00.734995  819928 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:19:00.735337  819928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:19:00.735348  819928 out.go:374] Setting ErrFile to fd 2...
	I0917 01:19:00.735352  819928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:19:00.735596  819928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 01:19:00.736194  819928 out.go:368] Setting JSON to false
	I0917 01:19:00.737729  819928 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14484,"bootTime":1758057457,"procs":397,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 01:19:00.737840  819928 start.go:140] virtualization: kvm guest
	I0917 01:19:00.740411  819928 out.go:179] * [kubernetes-upgrade-790254] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 01:19:00.742769  819928 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 01:19:00.742782  819928 notify.go:220] Checking for updates...
	I0917 01:19:00.745670  819928 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:19:00.747120  819928 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 01:19:00.748448  819928 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 01:19:00.753075  819928 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 01:19:00.754526  819928 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:19:00.756216  819928 config.go:182] Loaded profile config "kubernetes-upgrade-790254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:19:00.756769  819928 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 01:19:00.786805  819928 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 01:19:00.787031  819928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:19:00.859603  819928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:87 SystemTime:2025-09-17 01:19:00.84659496 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 01:19:00.859716  819928 docker.go:318] overlay module found
	I0917 01:19:00.861407  819928 out.go:179] * Using the docker driver based on existing profile
	I0917 01:19:00.862616  819928 start.go:304] selected driver: docker
	I0917 01:19:00.862637  819928 start.go:918] validating driver "docker" against &{Name:kubernetes-upgrade-790254 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubernetes-upgrade-790254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:19:00.862756  819928 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:19:00.863559  819928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:19:00.951672  819928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:87 SystemTime:2025-09-17 01:19:00.937155682 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 01:19:00.952037  819928 cni.go:84] Creating CNI manager for ""
	I0917 01:19:00.952112  819928 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 01:19:00.952162  819928 start.go:348] cluster config:
	{Name:kubernetes-upgrade-790254 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubernetes-upgrade-790254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseIn
terval:1m0s}
	I0917 01:19:00.953865  819928 out.go:179] * Starting "kubernetes-upgrade-790254" primary control-plane node in "kubernetes-upgrade-790254" cluster
	I0917 01:19:00.955480  819928 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 01:19:00.957052  819928 out.go:179] * Pulling base image v0.0.48 ...
	I0917 01:19:00.958229  819928 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:19:00.958296  819928 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 01:19:00.958312  819928 cache.go:58] Caching tarball of preloaded images
	I0917 01:19:00.958321  819928 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 01:19:00.958454  819928 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 01:19:00.958470  819928 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 01:19:00.958593  819928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kubernetes-upgrade-790254/config.json ...
	I0917 01:19:00.985484  819928 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 01:19:00.985509  819928 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 01:19:00.985534  819928 cache.go:232] Successfully downloaded all kic artifacts
	I0917 01:19:00.985565  819928 start.go:360] acquireMachinesLock for kubernetes-upgrade-790254: {Name:mk871e5aa73c184e777131ec5625b031ad3c4c6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:19:00.985656  819928 start.go:364] duration metric: took 50.628µs to acquireMachinesLock for "kubernetes-upgrade-790254"
	I0917 01:19:00.985681  819928 start.go:96] Skipping create...Using existing machine configuration
	I0917 01:19:00.985691  819928 fix.go:54] fixHost starting: 
	I0917 01:19:00.985979  819928 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-790254 --format={{.State.Status}}
	I0917 01:19:01.007226  819928 fix.go:112] recreateIfNeeded on kubernetes-upgrade-790254: state=Running err=<nil>
	W0917 01:19:01.007260  819928 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 01:19:01.008890  819928 out.go:252] * Updating the running docker "kubernetes-upgrade-790254" container ...
	I0917 01:19:01.008935  819928 machine.go:93] provisionDockerMachine start ...
	I0917 01:19:01.009024  819928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-790254
	I0917 01:19:01.033745  819928 main.go:141] libmachine: Using SSH client type: native
	I0917 01:19:01.034082  819928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I0917 01:19:01.034102  819928 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 01:19:01.183887  819928 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-790254
	
	I0917 01:19:01.183932  819928 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-790254"
	I0917 01:19:01.184001  819928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-790254
	I0917 01:19:01.204258  819928 main.go:141] libmachine: Using SSH client type: native
	I0917 01:19:01.204515  819928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I0917 01:19:01.204531  819928 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-790254 && echo "kubernetes-upgrade-790254" | sudo tee /etc/hostname
	I0917 01:19:01.356305  819928 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-790254
	
	I0917 01:19:01.356413  819928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-790254
	I0917 01:19:01.380317  819928 main.go:141] libmachine: Using SSH client type: native
	I0917 01:19:01.380578  819928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I0917 01:19:01.380601  819928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-790254' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-790254/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-790254' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 01:19:01.528379  819928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 01:19:01.528427  819928 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 01:19:01.528453  819928 ubuntu.go:190] setting up certificates
	I0917 01:19:01.528467  819928 provision.go:84] configureAuth start
	I0917 01:19:01.528541  819928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-790254
	I0917 01:19:01.548007  819928 provision.go:143] copyHostCerts
	I0917 01:19:01.548069  819928 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 01:19:01.548100  819928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 01:19:01.548202  819928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 01:19:01.548343  819928 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 01:19:01.548359  819928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 01:19:01.548415  819928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 01:19:01.548505  819928 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 01:19:01.548515  819928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 01:19:01.548553  819928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 01:19:01.548632  819928 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-790254 san=[127.0.0.1 192.168.94.2 kubernetes-upgrade-790254 localhost minikube]
	I0917 01:19:01.774158  819928 provision.go:177] copyRemoteCerts
	I0917 01:19:01.774241  819928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 01:19:01.774309  819928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-790254
	I0917 01:19:01.794486  819928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kubernetes-upgrade-790254/id_rsa Username:docker}
	I0917 01:19:01.896940  819928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 01:19:01.924767  819928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0917 01:19:01.952845  819928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 01:19:01.980907  819928 provision.go:87] duration metric: took 452.424173ms to configureAuth
	I0917 01:19:01.980935  819928 ubuntu.go:206] setting minikube options for container-runtime
	I0917 01:19:01.981112  819928 config.go:182] Loaded profile config "kubernetes-upgrade-790254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:19:01.981213  819928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-790254
	I0917 01:19:02.001451  819928 main.go:141] libmachine: Using SSH client type: native
	I0917 01:19:02.001774  819928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I0917 01:19:02.001799  819928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 01:19:02.373542  819928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 01:19:02.373572  819928 machine.go:96] duration metric: took 1.364628217s to provisionDockerMachine
	I0917 01:19:02.373587  819928 start.go:293] postStartSetup for "kubernetes-upgrade-790254" (driver="docker")
	I0917 01:19:02.373601  819928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 01:19:02.373662  819928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 01:19:02.373726  819928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-790254
	I0917 01:19:02.398000  819928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kubernetes-upgrade-790254/id_rsa Username:docker}
	I0917 01:19:02.509503  819928 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 01:19:02.514863  819928 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 01:19:02.514907  819928 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 01:19:02.514918  819928 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 01:19:02.514928  819928 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 01:19:02.514946  819928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 01:19:02.515014  819928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 01:19:02.515136  819928 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 01:19:02.515296  819928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 01:19:02.530879  819928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 01:19:02.570827  819928 start.go:296] duration metric: took 197.217247ms for postStartSetup
	I0917 01:19:02.570922  819928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 01:19:02.570961  819928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-790254
	I0917 01:19:02.593896  819928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kubernetes-upgrade-790254/id_rsa Username:docker}
	I0917 01:19:02.693752  819928 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 01:19:02.699617  819928 fix.go:56] duration metric: took 1.713914876s for fixHost
	I0917 01:19:02.699646  819928 start.go:83] releasing machines lock for "kubernetes-upgrade-790254", held for 1.713975689s
	I0917 01:19:02.699722  819928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-790254
	I0917 01:19:02.721291  819928 ssh_runner.go:195] Run: cat /version.json
	I0917 01:19:02.721361  819928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-790254
	I0917 01:19:02.721368  819928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 01:19:02.721443  819928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-790254
	I0917 01:19:02.742998  819928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kubernetes-upgrade-790254/id_rsa Username:docker}
	I0917 01:19:02.743609  819928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kubernetes-upgrade-790254/id_rsa Username:docker}
	I0917 01:19:02.948941  819928 ssh_runner.go:195] Run: systemctl --version
	I0917 01:19:02.960704  819928 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 01:19:03.122010  819928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 01:19:03.128113  819928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:19:03.140941  819928 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 01:19:03.141019  819928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:19:03.153743  819928 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 01:19:03.153772  819928 start.go:495] detecting cgroup driver to use...
	I0917 01:19:03.153808  819928 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 01:19:03.153852  819928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 01:19:03.174997  819928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 01:19:03.193675  819928 docker.go:218] disabling cri-docker service (if available) ...
	I0917 01:19:03.193739  819928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 01:19:03.212587  819928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 01:19:03.230563  819928 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 01:19:03.373283  819928 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 01:19:03.506633  819928 docker.go:234] disabling docker service ...
	I0917 01:19:03.506703  819928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 01:19:03.521869  819928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 01:19:03.538552  819928 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 01:19:03.672540  819928 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 01:19:03.792102  819928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 01:19:03.811765  819928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 01:19:03.835379  819928 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 01:19:03.835457  819928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:19:03.850027  819928 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 01:19:03.850100  819928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:19:03.863766  819928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:19:03.877380  819928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:19:03.891561  819928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 01:19:03.907051  819928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:19:03.922342  819928 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:19:03.939171  819928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:19:03.953281  819928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 01:19:03.965315  819928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 01:19:03.978686  819928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:19:04.119781  819928 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 01:20:34.336639  819928 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.216799573s)
	I0917 01:20:34.336681  819928 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 01:20:34.336738  819928 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 01:20:34.341531  819928 start.go:563] Will wait 60s for crictl version
	I0917 01:20:34.341604  819928 ssh_runner.go:195] Run: which crictl
	I0917 01:20:34.345267  819928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:20:34.377576  819928 retry.go:31] will retry after 5.380854727s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:20:34Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	I0917 01:20:39.761559  819928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:20:39.801897  819928 retry.go:31] will retry after 22.249154003s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:20:39Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	I0917 01:21:02.051660  819928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:21:02.085236  819928 retry.go:31] will retry after 15.073168141s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:02Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	I0917 01:21:17.159591  819928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:21:17.198360  819928 out.go:203] 
	W0917 01:21:17.199706  819928 out.go:285] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:17Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	
	W0917 01:21:17.199729  819928 out.go:285] * 
	W0917 01:21:17.202453  819928 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 01:21:17.204643  819928 out.go:203] 

                                                
                                                
** /stderr **
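Shortly before the fatal restart, the log above shows a series of sed edits against /etc/crio/crio.conf.d/02-crio.conf. Reconstructed from those sed expressions alone (an inference, not a capture of the actual file; CRI-O's TOML section headers are omitted here because the edits match keys wherever they live), the drop-in should end up containing roughly:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

Whether the restart hang recorded next is caused by these edits or by the in-place upgrade itself is not determinable from this log alone.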
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-790254 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-09-17 01:21:17.21538384 +0000 UTC m=+5584.682727417
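The proximate failure is visible in the timestamps: "sudo systemctl restart crio" returned only after 1m30.2s, which matches systemd's default 90 s stop timeout killing a hung CRI-O, and every crictl probe afterwards gets connection refused on /var/run/crio/crio.sock until the retry window closes. A minimal triage sequence for this state (a sketch, not part of the test run; the unit name and socket path are taken from the log, and the commands assume the kicbase container's default root user, so no sudo):

    docker exec kubernetes-upgrade-790254 systemctl status crio --no-pager
    docker exec kubernetes-upgrade-790254 journalctl -u crio --no-pager -n 50
    docker exec kubernetes-upgrade-790254 crictl --runtime-endpoint unix:///var/run/crio/crio.sock version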
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-790254
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-790254:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "014c88f247c2ab0e845371669f0592ff23339bcea710b5d559df6990f1106493",
	        "Created": "2025-09-17T01:14:04.100605458Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 763100,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T01:14:35.312369624Z",
	            "FinishedAt": "2025-09-17T01:14:34.590492401Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/014c88f247c2ab0e845371669f0592ff23339bcea710b5d559df6990f1106493/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/014c88f247c2ab0e845371669f0592ff23339bcea710b5d559df6990f1106493/hostname",
	        "HostsPath": "/var/lib/docker/containers/014c88f247c2ab0e845371669f0592ff23339bcea710b5d559df6990f1106493/hosts",
	        "LogPath": "/var/lib/docker/containers/014c88f247c2ab0e845371669f0592ff23339bcea710b5d559df6990f1106493/014c88f247c2ab0e845371669f0592ff23339bcea710b5d559df6990f1106493-json.log",
	        "Name": "/kubernetes-upgrade-790254",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-790254:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "kubernetes-upgrade-790254",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "014c88f247c2ab0e845371669f0592ff23339bcea710b5d559df6990f1106493",
	                "LowerDir": "/var/lib/docker/overlay2/acffa2d5e6c470492211e53c3616b348b2843524f2bcce3847222ae97099f9f7-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/acffa2d5e6c470492211e53c3616b348b2843524f2bcce3847222ae97099f9f7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/acffa2d5e6c470492211e53c3616b348b2843524f2bcce3847222ae97099f9f7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/acffa2d5e6c470492211e53c3616b348b2843524f2bcce3847222ae97099f9f7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-790254",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-790254/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-790254",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-790254",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-790254",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "67481c3d6e2e8fb026579bb84a0b9f1c0c0399c16371e2756285751fdd226f2a",
	            "SandboxKey": "/var/run/docker/netns/67481c3d6e2e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33398"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33399"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33402"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33400"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33401"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-790254": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:9e:5c:0f:20:05",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2f0a55cba78dfa306a82ecf5e1a08e53409222e4f9d4f0f3ac72acc168175a41",
	                    "EndpointID": "c8a67409831426e7d668d6c08683d685a1508fd18b31af4125326dd204702d3f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-790254",
	                        "014c88f247c2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
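The HostPort values in this inspect output are where the SSH endpoint 127.0.0.1:33398, used throughout the log above, comes from. The same lookup can be reproduced with the Go template minikube itself runs (the template appears verbatim in the provisioning log):

    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' kubernetes-upgrade-790254

For this container that prints 33398, matching the "22/tcp" entry under NetworkSettings.Ports above.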
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-790254 -n kubernetes-upgrade-790254
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-790254 logs -n 25
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-333616 sudo cat /etc/docker/daemon.json                                                                                          │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo docker system info                                                                                                   │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl status cri-docker --all --full --no-pager                                                                  │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl cat cri-docker --no-pager                                                                                  │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                             │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                       │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cri-dockerd --version                                                                                                │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl status containerd --all --full --no-pager                                                                  │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl cat containerd --no-pager                                                                                  │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /lib/systemd/system/containerd.service                                                                           │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/containerd/config.toml                                                                                      │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo containerd config dump                                                                                               │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl status crio --all --full --no-pager                                                                        │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl cat crio --no-pager                                                                                        │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                              │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo crio config                                                                                                          │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ delete  │ -p auto-333616                                                                                                                           │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ start   │ -p kindnet-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio │ kindnet-333616               │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ image   │ default-k8s-diff-port-377743 image list --format=json                                                                                    │ default-k8s-diff-port-377743 │ jenkins │ v1.37.0 │ 17 Sep 25 01:21 UTC │ 17 Sep 25 01:21 UTC │
	│ pause   │ -p default-k8s-diff-port-377743 --alsologtostderr -v=1                                                                                   │ default-k8s-diff-port-377743 │ jenkins │ v1.37.0 │ 17 Sep 25 01:21 UTC │                     │
	│ image   │ embed-certs-748988 image list --format=json                                                                                              │ embed-certs-748988           │ jenkins │ v1.37.0 │ 17 Sep 25 01:21 UTC │ 17 Sep 25 01:21 UTC │
	│ pause   │ -p embed-certs-748988 --alsologtostderr -v=1                                                                                             │ embed-certs-748988           │ jenkins │ v1.37.0 │ 17 Sep 25 01:21 UTC │ 17 Sep 25 01:21 UTC │
	│ unpause │ -p embed-certs-748988 --alsologtostderr -v=1                                                                                             │ embed-certs-748988           │ jenkins │ v1.37.0 │ 17 Sep 25 01:21 UTC │ 17 Sep 25 01:21 UTC │
	│ delete  │ -p default-k8s-diff-port-377743                                                                                                          │ default-k8s-diff-port-377743 │ jenkins │ v1.37.0 │ 17 Sep 25 01:21 UTC │                     │
	│ delete  │ -p embed-certs-748988                                                                                                                    │ embed-certs-748988           │ jenkins │ v1.37.0 │ 17 Sep 25 01:21 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 01:20:46
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 01:20:46.991253  841202 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:20:46.991355  841202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:20:46.991363  841202 out.go:374] Setting ErrFile to fd 2...
	I0917 01:20:46.991367  841202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:20:46.991948  841202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 01:20:46.993103  841202 out.go:368] Setting JSON to false
	I0917 01:20:46.994427  841202 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14590,"bootTime":1758057457,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 01:20:46.994531  841202 start.go:140] virtualization: kvm guest
	I0917 01:20:46.996762  841202 out.go:179] * [kindnet-333616] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 01:20:46.998033  841202 notify.go:220] Checking for updates...
	I0917 01:20:46.998040  841202 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 01:20:46.999333  841202 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:20:47.000646  841202 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 01:20:47.002223  841202 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 01:20:47.003668  841202 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 01:20:47.005002  841202 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:20:47.006954  841202 config.go:182] Loaded profile config "default-k8s-diff-port-377743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007104  841202 config.go:182] Loaded profile config "embed-certs-748988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007208  841202 config.go:182] Loaded profile config "kubernetes-upgrade-790254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007331  841202 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 01:20:47.034761  841202 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 01:20:47.034876  841202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:20:47.096866  841202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 01:20:47.086442486 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 01:20:47.097016  841202 docker.go:318] overlay module found
	I0917 01:20:47.099127  841202 out.go:179] * Using the docker driver based on user configuration
	I0917 01:20:47.100598  841202 start.go:304] selected driver: docker
	I0917 01:20:47.100620  841202 start.go:918] validating driver "docker" against <nil>
	I0917 01:20:47.100634  841202 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:20:47.101213  841202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:20:47.157653  841202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 01:20:47.147017932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 01:20:47.157843  841202 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0917 01:20:47.158047  841202 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 01:20:47.159808  841202 out.go:179] * Using Docker driver with root privileges
	I0917 01:20:47.161165  841202 cni.go:84] Creating CNI manager for "kindnet"
	I0917 01:20:47.161185  841202 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 01:20:47.161271  841202 start.go:348] cluster config:
	{Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:20:47.162725  841202 out.go:179] * Starting "kindnet-333616" primary control-plane node in "kindnet-333616" cluster
	I0917 01:20:47.164093  841202 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 01:20:47.165424  841202 out.go:179] * Pulling base image v0.0.48 ...
	I0917 01:20:47.166669  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:47.166713  841202 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 01:20:47.166725  841202 cache.go:58] Caching tarball of preloaded images
	I0917 01:20:47.166780  841202 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 01:20:47.166823  841202 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 01:20:47.166834  841202 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 01:20:47.166922  841202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json ...
	I0917 01:20:47.166937  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json: {Name:mkd38d1752014f4bab9dae52a7872fb8a5cc71fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:47.192914  841202 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 01:20:47.192938  841202 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 01:20:47.192970  841202 cache.go:232] Successfully downloaded all kic artifacts
	I0917 01:20:47.193004  841202 start.go:360] acquireMachinesLock for kindnet-333616: {Name:mkc24d8ed730ab1614498d5beb0270c845773667 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:20:47.193133  841202 start.go:364] duration metric: took 104.991µs to acquireMachinesLock for "kindnet-333616"
	I0917 01:20:47.193181  841202 start.go:93] Provisioning new machine with config: &{Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 01:20:47.193276  841202 start.go:125] createHost starting for "" (driver="docker")
	W0917 01:20:45.672555  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:47.672815  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	I0917 01:20:47.195051  841202 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 01:20:47.195285  841202 start.go:159] libmachine.API.Create for "kindnet-333616" (driver="docker")
	I0917 01:20:47.195320  841202 client.go:168] LocalClient.Create starting
	I0917 01:20:47.195405  841202 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 01:20:47.195446  841202 main.go:141] libmachine: Decoding PEM data...
	I0917 01:20:47.195462  841202 main.go:141] libmachine: Parsing certificate...
	I0917 01:20:47.195517  841202 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 01:20:47.195536  841202 main.go:141] libmachine: Decoding PEM data...
	I0917 01:20:47.195549  841202 main.go:141] libmachine: Parsing certificate...
	I0917 01:20:47.195889  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 01:20:47.213519  841202 cli_runner.go:211] docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 01:20:47.213608  841202 network_create.go:284] running [docker network inspect kindnet-333616] to gather additional debugging logs...
	I0917 01:20:47.213640  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616
	W0917 01:20:47.231055  841202 cli_runner.go:211] docker network inspect kindnet-333616 returned with exit code 1
	I0917 01:20:47.231092  841202 network_create.go:287] error running [docker network inspect kindnet-333616]: docker network inspect kindnet-333616: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-333616 not found
	I0917 01:20:47.231127  841202 network_create.go:289] output of [docker network inspect kindnet-333616]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-333616 not found
	
	** /stderr **
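
	The failed inspect above is minikube's existence probe: a non-zero exit code from docker network inspect simply means the network has not been created yet, so provisioning can proceed. A minimal sketch of the same probe, assuming the network name from this run:

	    if docker network inspect kindnet-333616 --format '{{.Name}}' >/dev/null 2>&1; then
	      echo "network exists"
	    else
	      echo "network missing, safe to create"   # the branch taken in the log above
	    fi
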
	I0917 01:20:47.231231  841202 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 01:20:47.249036  841202 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c0c35d0ccc41 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:29:30:69:13:a2} reservation:<nil>}
	I0917 01:20:47.249865  841202 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f7514a86599 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7e:c0:7e:cc:23:dc} reservation:<nil>}
	I0917 01:20:47.250378  841202 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0cef36e94e8e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:db:fd:7a:23:9f} reservation:<nil>}
	I0917 01:20:47.250966  841202 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8b9dd3e2b39a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:42:6a:d6:f0:80:2b} reservation:<nil>}
	I0917 01:20:47.251698  841202 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-2391a23950fb IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:6b:a9:b6:cd:fd} reservation:<nil>}
	I0917 01:20:47.252201  841202 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-2f0a55cba78d IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:3e:b8:6b:32:ae:3d} reservation:<nil>}
	I0917 01:20:47.253017  841202 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d90400}
	I0917 01:20:47.253041  841202 network_create.go:124] attempt to create docker network kindnet-333616 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0917 01:20:47.253107  841202 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-333616 kindnet-333616
	I0917 01:20:47.313030  841202 network_create.go:108] docker network kindnet-333616 192.168.103.0/24 created
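
	After skipping every /24 already claimed by another bridge (192.168.49.0 through 192.168.94.0 above), minikube lands on 192.168.103.0/24 and creates the bridge network. Roughly the standalone equivalent of that create step; the subnet, MTU, and labels are taken from the log, not required by Docker itself:

	    docker network create \
	      --driver=bridge \
	      --subnet=192.168.103.0/24 \
	      --gateway=192.168.103.1 \
	      -o com.docker.network.driver.mtu=1500 \
	      --label=created_by.minikube.sigs.k8s.io=true \
	      --label=name.minikube.sigs.k8s.io=kindnet-333616 \
	      kindnet-333616
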
	I0917 01:20:47.313138  841202 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-333616" container
	I0917 01:20:47.313224  841202 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 01:20:47.331726  841202 cli_runner.go:164] Run: docker volume create kindnet-333616 --label name.minikube.sigs.k8s.io=kindnet-333616 --label created_by.minikube.sigs.k8s.io=true
	I0917 01:20:47.350777  841202 oci.go:103] Successfully created a docker volume kindnet-333616
	I0917 01:20:47.350848  841202 cli_runner.go:164] Run: docker run --rm --name kindnet-333616-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-333616 --entrypoint /usr/bin/test -v kindnet-333616:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 01:20:47.744926  841202 oci.go:107] Successfully prepared a docker volume kindnet-333616
	I0917 01:20:47.744972  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:47.744994  841202 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 01:20:47.745059  841202 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-333616:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
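
	The extraction step seeds the node's /var by bind-mounting the lz4 preload tarball read-only and untarring it into the named volume through a throwaway kicbase container (the image ships a tar that understands -I lz4, per the command above). A sketch of the same volume-seeding pattern with placeholder paths:

	    PRELOAD=/path/to/preloaded-images.tar.lz4   # placeholder, not the minikube cache path
	    VOLUME=kindnet-333616
	    docker run --rm \
	      -v "$PRELOAD":/preloaded.tar:ro \
	      -v "$VOLUME":/extractDir \
	      --entrypoint /usr/bin/tar \
	      gcr.io/k8s-minikube/kicbase:v0.0.48 \
	      -I lz4 -xf /preloaded.tar -C /extractDir
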
	I0917 01:20:53.421561  834635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:20:53.456108  834635 retry.go:31] will retry after 11.768849883s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:20:53Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
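
	The retry above is driven purely by crictl failing to dial the cri-o socket; once crio finishes restarting, the same call succeeds. The socket can be probed by hand with an explicit endpoint, assuming the default cri-o socket path shown in the error:

	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
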
	W0917 01:20:50.174804  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:52.673786  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	I0917 01:20:52.004993  841202 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-333616:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.25985666s)
	I0917 01:20:52.005028  841202 kic.go:203] duration metric: took 4.26003048s to extract preloaded images to volume ...
	W0917 01:20:52.005133  841202 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 01:20:52.005164  841202 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 01:20:52.005202  841202 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 01:20:52.066749  841202 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-333616 --name kindnet-333616 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-333616 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-333616 --network kindnet-333616 --ip 192.168.103.2 --volume kindnet-333616:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
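
	The node container runs privileged, pinned to the static IP on the freshly created network, with tmpfs /tmp and /run, and with each service port published to an ephemeral 127.0.0.1 port. The provisioner below recovers those ephemeral ports via container inspect; the SSH lookup, for example, is:

	    docker container inspect kindnet-333616 \
	      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
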
	I0917 01:20:52.362306  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Running}}
	I0917 01:20:52.383555  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.406449  841202 cli_runner.go:164] Run: docker exec kindnet-333616 stat /var/lib/dpkg/alternatives/iptables
	I0917 01:20:52.459697  841202 oci.go:144] the created container "kindnet-333616" has a running status.
	I0917 01:20:52.459737  841202 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa...
	I0917 01:20:52.716503  841202 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 01:20:52.742117  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.761330  841202 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 01:20:52.761355  841202 kic_runner.go:114] Args: [docker exec --privileged kindnet-333616 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 01:20:52.809335  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.831209  841202 machine.go:93] provisionDockerMachine start ...
	I0917 01:20:52.831331  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:52.852889  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:52.853249  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:52.853269  841202 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 01:20:52.992938  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-333616
	
	I0917 01:20:52.992969  841202 ubuntu.go:182] provisioning hostname "kindnet-333616"
	I0917 01:20:52.993051  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.013532  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:53.013764  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:53.013778  841202 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-333616 && echo "kindnet-333616" | sudo tee /etc/hostname
	I0917 01:20:53.166881  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-333616
	
	I0917 01:20:53.166973  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.187352  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:53.187631  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:53.187658  841202 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-333616' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-333616/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-333616' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 01:20:53.332338  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 01:20:53.332408  841202 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 01:20:53.332452  841202 ubuntu.go:190] setting up certificates
	I0917 01:20:53.332472  841202 provision.go:84] configureAuth start
	I0917 01:20:53.332570  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:53.352359  841202 provision.go:143] copyHostCerts
	I0917 01:20:53.352466  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 01:20:53.352481  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 01:20:53.352553  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 01:20:53.352652  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 01:20:53.352661  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 01:20:53.352689  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 01:20:53.352759  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 01:20:53.352766  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 01:20:53.352789  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 01:20:53.352841  841202 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.kindnet-333616 san=[127.0.0.1 192.168.103.2 kindnet-333616 localhost minikube]
	I0917 01:20:53.973038  841202 provision.go:177] copyRemoteCerts
	I0917 01:20:53.973143  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 01:20:53.973182  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.991696  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.091426  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0917 01:20:54.121737  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 01:20:54.150762  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 01:20:54.179160  841202 provision.go:87] duration metric: took 846.669603ms to configureAuth
	I0917 01:20:54.179187  841202 ubuntu.go:206] setting minikube options for container-runtime
	I0917 01:20:54.179345  841202 config.go:182] Loaded profile config "kindnet-333616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:54.179463  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.198684  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:54.198909  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:54.198925  841202 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 01:20:54.444483  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 01:20:54.444511  841202 machine.go:96] duration metric: took 1.613270939s to provisionDockerMachine
	I0917 01:20:54.444522  841202 client.go:171] duration metric: took 7.249193748s to LocalClient.Create
	I0917 01:20:54.444542  841202 start.go:167] duration metric: took 7.249257601s to libmachine.API.Create "kindnet-333616"
	I0917 01:20:54.444554  841202 start.go:293] postStartSetup for "kindnet-333616" (driver="docker")
	I0917 01:20:54.444572  841202 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 01:20:54.444641  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 01:20:54.444690  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.463166  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.563892  841202 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 01:20:54.567735  841202 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 01:20:54.567765  841202 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 01:20:54.567772  841202 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 01:20:54.567782  841202 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 01:20:54.567795  841202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 01:20:54.567855  841202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 01:20:54.567966  841202 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 01:20:54.568108  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 01:20:54.577885  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 01:20:54.606690  841202 start.go:296] duration metric: took 162.114963ms for postStartSetup
	I0917 01:20:54.607107  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:54.625322  841202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json ...
	I0917 01:20:54.625758  841202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 01:20:54.625821  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.643332  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.737805  841202 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 01:20:54.742465  841202 start.go:128] duration metric: took 7.549168533s to createHost
	I0917 01:20:54.742494  841202 start.go:83] releasing machines lock for "kindnet-333616", held for 7.549346209s
	I0917 01:20:54.742570  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:54.759991  841202 ssh_runner.go:195] Run: cat /version.json
	I0917 01:20:54.760051  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.760083  841202 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 01:20:54.760154  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.778915  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.779306  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.952563  841202 ssh_runner.go:195] Run: systemctl --version
	I0917 01:20:54.957470  841202 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 01:20:55.101309  841202 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 01:20:55.106493  841202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:20:55.131742  841202 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 01:20:55.131831  841202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:20:55.164272  841202 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 01:20:55.164303  841202 start.go:495] detecting cgroup driver to use...
	I0917 01:20:55.164352  841202 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 01:20:55.164430  841202 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 01:20:55.182732  841202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 01:20:55.194856  841202 docker.go:218] disabling cri-docker service (if available) ...
	I0917 01:20:55.194918  841202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 01:20:55.209368  841202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 01:20:55.224908  841202 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 01:20:55.294219  841202 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 01:20:55.366744  841202 docker.go:234] disabling docker service ...
	I0917 01:20:55.366805  841202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 01:20:55.386004  841202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 01:20:55.398281  841202 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 01:20:55.471097  841202 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 01:20:55.620605  841202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 01:20:55.632936  841202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 01:20:55.650751  841202 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 01:20:55.650813  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.665355  841202 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 01:20:55.665449  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.677774  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.688724  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.700141  841202 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 01:20:55.711135  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.722974  841202 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.741236  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.752869  841202 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 01:20:55.762991  841202 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 01:20:55.772774  841202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:20:55.842833  841202 ssh_runner.go:195] Run: sudo systemctl restart crio
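
	The sed pipeline above rewrites minikube's cri-o drop-in in place: pause image, systemd cgroup manager, conmon cgroup, and the unprivileged-port sysctl, followed by a daemon-reload and restart. One quick way to confirm the effective overrides after the restart (path from the log):

	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
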
	I0917 01:20:55.939370  841202 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 01:20:55.939456  841202 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 01:20:55.943491  841202 start.go:563] Will wait 60s for crictl version
	I0917 01:20:55.943562  841202 ssh_runner.go:195] Run: which crictl
	I0917 01:20:55.947384  841202 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:20:55.984137  841202 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 01:20:55.984206  841202 ssh_runner.go:195] Run: crio --version
	I0917 01:20:56.022652  841202 ssh_runner.go:195] Run: crio --version
	I0917 01:20:56.062561  841202 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 01:20:56.063985  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 01:20:56.081880  841202 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0917 01:20:56.086073  841202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
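
	host.minikube.internal is minikube's stable alias for the network gateway (192.168.103.1 here); the rewrite above strips any stale entry before appending the current one. Verifying the alias from inside the node:

	    grep 'host.minikube.internal' /etc/hosts
	    getent hosts host.minikube.internal
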
	I0917 01:20:56.098482  841202 kubeadm.go:875] updating cluster {Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 01:20:56.098622  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:56.098685  841202 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:20:56.169870  841202 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 01:20:56.169898  841202 crio.go:433] Images already preloaded, skipping extraction
	I0917 01:20:56.169953  841202 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:20:56.206753  841202 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 01:20:56.206784  841202 cache_images.go:85] Images are preloaded, skipping loading
	I0917 01:20:56.206794  841202 kubeadm.go:926] updating node { 192.168.103.2 8443 v1.34.0 crio true true} ...
	I0917 01:20:56.206913  841202 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-333616 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
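
	The rendered unit overrides ExecStart via a systemd drop-in; minikube writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps below. On the node, systemd shows the unit with its drop-ins merged:

	    systemctl cat kubelet
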
	I0917 01:20:56.207001  841202 ssh_runner.go:195] Run: crio config
	I0917 01:20:56.253538  841202 cni.go:84] Creating CNI manager for "kindnet"
	I0917 01:20:56.253567  841202 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 01:20:56.253590  841202 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-333616 NodeName:kindnet-333616 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 01:20:56.253716  841202 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-333616"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
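	The generated file stacks four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. If needed, kubeadm can validate such a config without mutating the node, since --config combines with --dry-run (the path below is where minikube stages it):

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
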
	I0917 01:20:56.253775  841202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 01:20:56.264146  841202 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 01:20:56.264224  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 01:20:56.274749  841202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0917 01:20:56.293906  841202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 01:20:56.316487  841202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I0917 01:20:56.336550  841202 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0917 01:20:56.340325  841202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 01:20:56.352936  841202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:20:56.418882  841202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 01:20:56.445037  841202 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616 for IP: 192.168.103.2
	I0917 01:20:56.445069  841202 certs.go:194] generating shared ca certs ...
	I0917 01:20:56.445096  841202 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.445265  841202 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 01:20:56.445328  841202 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 01:20:56.445342  841202 certs.go:256] generating profile certs ...
	I0917 01:20:56.445433  841202 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key
	I0917 01:20:56.445452  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt with IP's: []
	I0917 01:20:56.575658  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt ...
	I0917 01:20:56.575692  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt: {Name:mke4c01e2ad680ec95da34129972695bc352dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.575918  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key ...
	I0917 01:20:56.575935  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key: {Name:mk196e199bf8e509067e257fa5978cc4017a9515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.576063  841202 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883
	I0917 01:20:56.576083  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0917 01:20:56.891743  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 ...
	I0917 01:20:56.891776  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883: {Name:mk080638a3e062c43555f3e1bbede660cca9c8ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.891955  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883 ...
	I0917 01:20:56.891969  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883: {Name:mkbe71ad29db0d31be773639ab90fdd03d84b089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.892043  841202 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt
	I0917 01:20:56.892145  841202 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key
	I0917 01:20:56.892212  841202 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key
	I0917 01:20:56.892228  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt with IP's: []
	W0917 01:20:55.172587  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:57.173997  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:59.673374  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	I0917 01:20:57.205489  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt ...
	I0917 01:20:57.205524  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt: {Name:mkf6b5ecd44d0faf20e6e53acc7eeebe333eca17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:57.205728  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key ...
	I0917 01:20:57.205746  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key: {Name:mk2b3f753e527ada6b46c8fd672f3b210e243668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:57.205983  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 01:20:57.206033  841202 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 01:20:57.206049  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 01:20:57.206079  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 01:20:57.206110  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 01:20:57.206143  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 01:20:57.206196  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 01:20:57.206849  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 01:20:57.236316  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 01:20:57.264039  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 01:20:57.290903  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 01:20:57.316649  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 01:20:57.343336  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 01:20:57.369426  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 01:20:57.395757  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 01:20:57.422129  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 01:20:57.452169  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 01:20:57.479060  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 01:20:57.505045  841202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
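
	The apiserver certificate generated above embeds the service VIP, loopback, and node IP as SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.103.2). After the copy, the SAN list can be double-checked on the node:

	    openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	      | grep -A1 'Subject Alternative Name'
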
	I0917 01:20:57.524210  841202 ssh_runner.go:195] Run: openssl version
	I0917 01:20:57.530236  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 01:20:57.540421  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.544062  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.544118  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.551188  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 01:20:57.561515  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 01:20:57.572283  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.576261  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.576323  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.583692  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 01:20:57.593924  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 01:20:57.604001  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.608154  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.608211  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.615475  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
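
	The ln -fs steps implement OpenSSL's hashed-directory lookup: each CA must be reachable under /etc/ssl/certs as <subject-hash>.0, and the hash comes straight from the certificate, which is why minikube computes it first:

	    # prints b5213941 for minikubeCA in this run, matching the symlink above
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
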
	I0917 01:20:57.625656  841202 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 01:20:57.629541  841202 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 01:20:57.629606  841202 kubeadm.go:392] StartCluster: {Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:20:57.629685  841202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 01:20:57.629748  841202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 01:20:57.668306  841202 cri.go:89] found id: ""
	I0917 01:20:57.668384  841202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 01:20:57.679315  841202 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 01:20:57.689592  841202 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 01:20:57.689666  841202 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 01:20:57.699255  841202 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 01:20:57.699272  841202 kubeadm.go:157] found existing configuration files:
	
	I0917 01:20:57.699327  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 01:20:57.708879  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 01:20:57.708950  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 01:20:57.718406  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 01:20:57.728172  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 01:20:57.728251  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 01:20:57.737991  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 01:20:57.748427  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 01:20:57.748487  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 01:20:57.757822  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 01:20:57.767640  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 01:20:57.767708  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 01:20:57.776934  841202 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 01:20:57.849477  841202 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0917 01:20:57.909176  841202 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
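Note: the SystemVerification warning above means kubeadm could not read the kernel build configuration: its validator tries `modprobe configs` to expose /proc/config.gz and falls back to paths such as /boot/config-$(uname -r), none of which are available inside the kicbase container on this GCP kernel. A quick manual check of the same sources (the search order is an assumption about kubeadm's kernel validator):

    ls -l /proc/config.gz "/boot/config-$(uname -r)" 2>/dev/null
    grep -E 'CONFIG_CGROUPS=|CONFIG_NAMESPACES=' "/boot/config-$(uname -r)"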
	I0917 01:21:01.172820  832418 pod_ready.go:94] pod "coredns-66bc5c9577-qqxrk" is "Ready"
	I0917 01:21:01.172851  832418 pod_ready.go:86] duration metric: took 38.505527826s for pod "coredns-66bc5c9577-qqxrk" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.175617  832418 pod_ready.go:83] waiting for pod "etcd-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.179752  832418 pod_ready.go:94] pod "etcd-embed-certs-748988" is "Ready"
	I0917 01:21:01.179779  832418 pod_ready.go:86] duration metric: took 4.135657ms for pod "etcd-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.182426  832418 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.186899  832418 pod_ready.go:94] pod "kube-apiserver-embed-certs-748988" is "Ready"
	I0917 01:21:01.186928  832418 pod_ready.go:86] duration metric: took 4.474792ms for pod "kube-apiserver-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.189100  832418 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.371319  832418 pod_ready.go:94] pod "kube-controller-manager-embed-certs-748988" is "Ready"
	I0917 01:21:01.371352  832418 pod_ready.go:86] duration metric: took 182.22498ms for pod "kube-controller-manager-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.570958  832418 pod_ready.go:83] waiting for pod "kube-proxy-2bkdq" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.970376  832418 pod_ready.go:94] pod "kube-proxy-2bkdq" is "Ready"
	I0917 01:21:01.970432  832418 pod_ready.go:86] duration metric: took 399.444446ms for pod "kube-proxy-2bkdq" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.171077  832418 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.570435  832418 pod_ready.go:94] pod "kube-scheduler-embed-certs-748988" is "Ready"
	I0917 01:21:02.570467  832418 pod_ready.go:86] duration metric: took 399.360883ms for pod "kube-scheduler-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.570484  832418 pod_ready.go:40] duration metric: took 39.908444834s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 01:21:02.617522  832418 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 01:21:02.619899  832418 out.go:179] * Done! kubectl is now configured to use "embed-certs-748988" cluster and "default" namespace by default
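Note: the pod_ready polling loop above (≈39.9s in total) has a rough kubectl equivalent; a hedged one-liner for the same CoreDNS readiness check, reusing the context name from the log:

    kubectl --context embed-certs-748988 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=6m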
	I0917 01:21:05.225533  834635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:21:05.270428  834635 out.go:203] 
	W0917 01:21:05.271803  834635 out.go:285] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:05Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	
	W0917 01:21:05.271827  834635 out.go:285] * 
	W0917 01:21:05.273977  834635 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 01:21:05.275509  834635 out.go:203] 
	I0917 01:21:02.051660  819928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:21:02.085236  819928 retry.go:31] will retry after 15.073168141s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:02Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	I0917 01:21:08.749313  841202 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0917 01:21:08.749411  841202 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 01:21:08.749519  841202 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0917 01:21:08.749589  841202 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0917 01:21:08.749650  841202 kubeadm.go:310] OS: Linux
	I0917 01:21:08.749713  841202 kubeadm.go:310] CGROUPS_CPU: enabled
	I0917 01:21:08.749779  841202 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0917 01:21:08.749841  841202 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0917 01:21:08.749902  841202 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0917 01:21:08.749959  841202 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0917 01:21:08.750017  841202 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0917 01:21:08.750085  841202 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0917 01:21:08.750143  841202 kubeadm.go:310] CGROUPS_IO: enabled
	I0917 01:21:08.750240  841202 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 01:21:08.750408  841202 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 01:21:08.750528  841202 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 01:21:08.750612  841202 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 01:21:08.752776  841202 out.go:252]   - Generating certificates and keys ...
	I0917 01:21:08.752899  841202 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 01:21:08.752994  841202 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 01:21:08.753166  841202 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 01:21:08.753271  841202 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 01:21:08.753363  841202 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 01:21:08.753458  841202 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 01:21:08.753543  841202 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 01:21:08.753685  841202 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-333616 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0917 01:21:08.753763  841202 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 01:21:08.753955  841202 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-333616 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0917 01:21:08.754090  841202 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 01:21:08.754192  841202 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 01:21:08.754257  841202 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 01:21:08.754342  841202 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 01:21:08.754430  841202 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 01:21:08.754478  841202 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 01:21:08.754527  841202 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 01:21:08.754580  841202 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 01:21:08.754625  841202 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 01:21:08.754700  841202 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 01:21:08.754755  841202 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 01:21:08.756322  841202 out.go:252]   - Booting up control plane ...
	I0917 01:21:08.756479  841202 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 01:21:08.756610  841202 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 01:21:08.756707  841202 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 01:21:08.756865  841202 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 01:21:08.756981  841202 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0917 01:21:08.757139  841202 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0917 01:21:08.757242  841202 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 01:21:08.757292  841202 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 01:21:08.757475  841202 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 01:21:08.757598  841202 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 01:21:08.757667  841202 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.884368ms
	I0917 01:21:08.757780  841202 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0917 01:21:08.757913  841202 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I0917 01:21:08.758047  841202 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0917 01:21:08.758174  841202 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0917 01:21:08.758291  841202 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.005156484s
	I0917 01:21:08.758398  841202 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.505889566s
	I0917 01:21:08.758508  841202 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.501611145s
	I0917 01:21:08.758646  841202 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 01:21:08.758798  841202 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 01:21:08.758886  841202 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 01:21:08.759100  841202 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-333616 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 01:21:08.759198  841202 kubeadm.go:310] [bootstrap-token] Using token: 162lgr.l6wrgxxcju3qv1m6
	I0917 01:21:08.760426  841202 out.go:252]   - Configuring RBAC rules ...
	I0917 01:21:08.760541  841202 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 01:21:08.760645  841202 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 01:21:08.760852  841202 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 01:21:08.761023  841202 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 01:21:08.761194  841202 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 01:21:08.761327  841202 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 01:21:08.761559  841202 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 01:21:08.761636  841202 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 01:21:08.761697  841202 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 01:21:08.761708  841202 kubeadm.go:310] 
	I0917 01:21:08.761785  841202 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 01:21:08.761796  841202 kubeadm.go:310] 
	I0917 01:21:08.761916  841202 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 01:21:08.761932  841202 kubeadm.go:310] 
	I0917 01:21:08.761974  841202 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 01:21:08.762071  841202 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 01:21:08.762135  841202 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 01:21:08.762145  841202 kubeadm.go:310] 
	I0917 01:21:08.762215  841202 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 01:21:08.762222  841202 kubeadm.go:310] 
	I0917 01:21:08.762262  841202 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 01:21:08.762269  841202 kubeadm.go:310] 
	I0917 01:21:08.762319  841202 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 01:21:08.762431  841202 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 01:21:08.762533  841202 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 01:21:08.762551  841202 kubeadm.go:310] 
	I0917 01:21:08.762669  841202 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 01:21:08.762785  841202 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 01:21:08.762797  841202 kubeadm.go:310] 
	I0917 01:21:08.762899  841202 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 162lgr.l6wrgxxcju3qv1m6 \
	I0917 01:21:08.763036  841202 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 \
	I0917 01:21:08.763072  841202 kubeadm.go:310] 	--control-plane 
	I0917 01:21:08.763080  841202 kubeadm.go:310] 
	I0917 01:21:08.763190  841202 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 01:21:08.763210  841202 kubeadm.go:310] 
	I0917 01:21:08.763278  841202 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 162lgr.l6wrgxxcju3qv1m6 \
	I0917 01:21:08.763415  841202 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 
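Note: the --discovery-token-ca-cert-hash printed above is sha256 over the DER-encoded public key of the cluster CA. It can be recomputed on the node (minikube keeps the CA under /var/lib/minikube/certs, per the certificateDir line above; the pipeline is the standard one from the kubeadm docs and assumes an RSA CA key):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'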
	I0917 01:21:08.763437  841202 cni.go:84] Creating CNI manager for "kindnet"
	I0917 01:21:08.766700  841202 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0917 01:21:08.767858  841202 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0917 01:21:08.773343  841202 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0917 01:21:08.773364  841202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0917 01:21:08.793795  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0917 01:21:09.025565  841202 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 01:21:09.025804  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:09.025927  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-333616 minikube.k8s.io/updated_at=2025_09_17T01_21_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=kindnet-333616 minikube.k8s.io/primary=true
	I0917 01:21:09.125386  841202 ops.go:34] apiserver oom_adj: -16
	I0917 01:21:09.125519  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:09.626138  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:10.126613  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:10.626037  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:11.126442  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:11.626219  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:12.125827  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:12.626205  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:13.126607  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:13.209490  841202 kubeadm.go:1105] duration metric: took 4.183732835s to wait for elevateKubeSystemPrivileges
	I0917 01:21:13.209537  841202 kubeadm.go:394] duration metric: took 15.579926785s to StartCluster
	I0917 01:21:13.209560  841202 settings.go:142] acquiring lock: {Name:mk3b4e5824fb8718eece00dc70a9d05f0af2a028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:21:13.209647  841202 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 01:21:13.211405  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:21:13.211740  841202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 01:21:13.211739  841202 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 01:21:13.211827  841202 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 01:21:13.211925  841202 addons.go:69] Setting storage-provisioner=true in profile "kindnet-333616"
	I0917 01:21:13.211938  841202 config.go:182] Loaded profile config "kindnet-333616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:21:13.211959  841202 addons.go:238] Setting addon storage-provisioner=true in "kindnet-333616"
	I0917 01:21:13.211967  841202 addons.go:69] Setting default-storageclass=true in profile "kindnet-333616"
	I0917 01:21:13.211992  841202 host.go:66] Checking if "kindnet-333616" exists ...
	I0917 01:21:13.212000  841202 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-333616"
	I0917 01:21:13.212458  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:21:13.212600  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:21:13.217114  841202 out.go:179] * Verifying Kubernetes components...
	I0917 01:21:13.219705  841202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:21:13.240699  841202 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 01:21:13.241758  841202 addons.go:238] Setting addon default-storageclass=true in "kindnet-333616"
	I0917 01:21:13.242304  841202 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 01:21:13.242325  841202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 01:21:13.242400  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:21:13.243681  841202 host.go:66] Checking if "kindnet-333616" exists ...
	I0917 01:21:13.244225  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:21:13.282147  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:21:13.285590  841202 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 01:21:13.285680  841202 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 01:21:13.285779  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:21:13.310642  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:21:13.331185  841202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 01:21:13.371036  841202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 01:21:13.413176  841202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 01:21:13.435107  841202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 01:21:13.535558  841202 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0917 01:21:13.538037  841202 node_ready.go:35] waiting up to 15m0s for node "kindnet-333616" to be "Ready" ...
	I0917 01:21:13.774449  841202 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0917 01:21:13.775947  841202 addons.go:514] duration metric: took 564.117634ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0917 01:21:14.043684  841202 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-333616" context rescaled to 1 replicas
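Note: the rescale above is minikube trimming the default two CoreDNS replicas down to one on a single-node cluster; a roughly equivalent CLI invocation would be:

    kubectl --context kindnet-333616 -n kube-system scale deployment coredns --replicas=1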
	W0917 01:21:15.542411  841202 node_ready.go:57] node "kindnet-333616" has "Ready":"False" status (will retry)
	I0917 01:21:17.159591  819928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:21:17.198360  819928 out.go:203] 
	W0917 01:21:17.199706  819928 out.go:285] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:17Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	
	W0917 01:21:17.199729  819928 out.go:285] * 
	W0917 01:21:17.202453  819928 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 01:21:17.204643  819928 out.go:203] 
	
	
	==> CRI-O <==
	Sep 17 01:20:34 kubernetes-upgrade-790254 systemd[1]: crio.service: Failed with result 'timeout'.
	Sep 17 01:20:34 kubernetes-upgrade-790254 systemd[1]: Stopped Container Runtime Interface for OCI (CRI-O).
	Sep 17 01:20:34 kubernetes-upgrade-790254 systemd[1]: Starting Container Runtime Interface for OCI (CRI-O)...
	Sep 17 01:20:34 kubernetes-upgrade-790254 crio[9553]: time="2025-09-17 01:20:34.220784690Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	Sep 17 01:20:34 kubernetes-upgrade-790254 crio[9553]: time="2025-09-17 01:20:34.220938626Z" level=info msg="Node configuration value for hugetlb cgroup is true"
	Sep 17 01:20:34 kubernetes-upgrade-790254 crio[9553]: time="2025-09-17 01:20:34.220969518Z" level=info msg="Node configuration value for pid cgroup is true"
	Sep 17 01:20:34 kubernetes-upgrade-790254 crio[9553]: time="2025-09-17 01:20:34.221023162Z" level=info msg="Node configuration value for memoryswap cgroup is true"
	Sep 17 01:20:34 kubernetes-upgrade-790254 crio[9553]: time="2025-09-17 01:20:34.221029572Z" level=info msg="Node configuration value for cgroup v2 is true"
	Sep 17 01:20:34 kubernetes-upgrade-790254 crio[9553]: time="2025-09-17 01:20:34.226884727Z" level=info msg="Node configuration value for systemd CollectMode is true"
	Sep 17 01:20:34 kubernetes-upgrade-790254 crio[9553]: time="2025-09-17 01:20:34.233244211Z" level=info msg="Node configuration value for systemd AllowedCPUs is true"
	Sep 17 01:20:34 kubernetes-upgrade-790254 crio[9553]: time="2025-09-17 01:20:34.233560920Z" level=info msg="[graphdriver] using prior storage driver: overlay"
	Sep 17 01:20:34 kubernetes-upgrade-790254 crio[9553]: time="2025-09-17 01:20:34.234605160Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	Sep 17 01:20:34 kubernetes-upgrade-790254 crio[9553]: time="2025-09-17 01:20:34.237386148Z" level=info msg="Conmon does support the --sync option"
	Sep 17 01:20:34 kubernetes-upgrade-790254 crio[9553]: time="2025-09-17 01:20:34.237427589Z" level=info msg="Conmon does support the --log-global-size-max option"
	Sep 17 01:20:34 kubernetes-upgrade-790254 crio[9553]: time="2025-09-17 01:20:34.239071224Z" level=info msg="Using seccomp default profile when unspecified: true"
	Sep 17 01:20:34 kubernetes-upgrade-790254 crio[9553]: time="2025-09-17 01:20:34.239096801Z" level=info msg="No seccomp profile specified, using the internal default"
	Sep 17 01:20:34 kubernetes-upgrade-790254 crio[9553]: time="2025-09-17 01:20:34.239103456Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Sep 17 01:20:34 kubernetes-upgrade-790254 crio[9553]: time="2025-09-17 01:20:34.239146988Z" level=info msg="No blockio config file specified, blockio not configured"
	Sep 17 01:20:34 kubernetes-upgrade-790254 crio[9553]: time="2025-09-17 01:20:34.239160087Z" level=info msg="RDT not available in the host system"
	Sep 17 01:20:34 kubernetes-upgrade-790254 crio[9553]: time="2025-09-17 01:20:34.239273831Z" level=info msg="Updated default CNI network name to "
	Sep 17 01:20:34 kubernetes-upgrade-790254 crio[9553]: time="2025-09-17 01:20:34.325999451Z" level=warning msg="Could not restore container 6036711ad65f9e4688477872c8d58a3c73c7647857e46855f207e460add4b23e: error reading container state from disk \"6036711ad65f9e4688477872c8d58a3c73c7647857e46855f207e460add4b23e\": open /var/lib/containers/storage/overlay-containers/6036711ad65f9e4688477872c8d58a3c73c7647857e46855f207e460add4b23e/userdata/state.json: no such file or directory"
	Sep 17 01:20:34 kubernetes-upgrade-790254 crio[9553]: time="2025-09-17 01:20:34.334254967Z" level=fatal msg="Failed to create new watch: too many open files"
	Sep 17 01:20:34 kubernetes-upgrade-790254 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	Sep 17 01:20:34 kubernetes-upgrade-790254 systemd[1]: crio.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 01:20:34 kubernetes-upgrade-790254 systemd[1]: crio.service: Failed with result 'exit-code'.
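Note: the fatal "Failed to create new watch: too many open files" is EMFILE while CRI-O restores containers, typically inotify instance exhaustion (a low RLIMIT_NOFILE can produce the same message). CRI-O exits with status 1, systemd records the 'exit-code' failure above, and every later crictl call then gets "connection refused". A hedged mitigation on the host (the limit values are illustrative, not tuned):

    # inspect current limits
    sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
    # raise them for this boot
    sudo sysctl -w fs.inotify.max_user_instances=512 fs.inotify.max_user_watches=524288
    # persist across reboots
    printf 'fs.inotify.max_user_instances=512\nfs.inotify.max_user_watches=524288\n' \
      | sudo tee /etc/sysctl.d/99-inotify.conf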
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:18Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-790254
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-790254
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=kubernetes-upgrade-790254
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T01_18_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 01:18:56 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-790254
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 01:21:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 01:21:11 +0000   Wed, 17 Sep 2025 01:18:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 01:21:11 +0000   Wed, 17 Sep 2025 01:18:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 01:21:11 +0000   Wed, 17 Sep 2025 01:18:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 17 Sep 2025 01:21:11 +0000   Wed, 17 Sep 2025 01:19:09 +0000   KubeletNotReady              [container runtime is down, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?]
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    kubernetes-upgrade-790254
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 23e44c6416fb4e50b822813931adb4ae
	  System UUID:                cfb2ee64-dd04-44c6-9657-f643c128219e
	  Boot ID:                    0fc5663f-b128-4c7c-a0e9-9f6b9c12ae51
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://Unknown
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-fc2vm                             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m14s
	  kube-system                 coredns-66bc5c9577-hspm2                             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m14s
	  kube-system                 etcd-kubernetes-upgrade-790254                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m19s
	  kube-system                 kindnet-xnggj                                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m15s
	  kube-system                 kube-apiserver-kubernetes-upgrade-790254             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-790254    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-proxy-grvw8                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-scheduler-kubernetes-upgrade-790254             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m20s  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m20s  kubelet          Node kubernetes-upgrade-790254 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m20s  kubelet          Node kubernetes-upgrade-790254 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m20s  kubelet          Node kubernetes-upgrade-790254 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m16s  node-controller  Node kubernetes-upgrade-790254 event: Registered Node kubernetes-upgrade-790254 in Controller
	  Normal   NodeReady                2m16s  kubelet          Node kubernetes-upgrade-790254 status is now: NodeReady
	  Normal   NodeNotReady             2m9s   kubelet          Node kubernetes-upgrade-790254 status is now: NodeNotReady
	  Warning  ContainerGCFailed        80s    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Warning  ContainerGCFailed        20s    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused"
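Note: the Ready=False condition and its runtime-is-down message can be pulled directly with a jsonpath query, which is handy when diffing repeated runs:

    kubectl --context kubernetes-upgrade-790254 get node kubernetes-upgrade-790254 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"  "}{.status.conditions[?(@.type=="Ready")].message}{"\n"}'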
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
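Note: the repeated "martian destination 127.0.0.11" lines are the kernel flagging packets addressed to a loopback address (127.0.0.11 is Docker's embedded DNS resolver) arriving on veth devices, routine noise for nested Docker networks rather than a test failure. Whether they are logged is a per-interface sysctl:

    sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.default.log_martians
    sudo sysctl -w net.ipv4.conf.all.log_martians=0   # silence the noise if desired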
	
	
	==> kernel <==
	 01:21:18 up  4:03,  0 users,  load average: 2.56, 3.16, 2.37
	Linux kubernetes-upgrade-790254 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kubelet <==
	Sep 17 01:21:13 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:13.036891    9096 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused\"" filter="<nil>"
	Sep 17 01:21:13 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:13.036958    9096 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Sep 17 01:21:13 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:13.036973    9096 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Sep 17 01:21:13 kubernetes-upgrade-790254 kubelet[9096]: W0917 01:21:13.234481    9096 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused"
	Sep 17 01:21:13 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:13.911139    9096 log.go:32] "Status from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Sep 17 01:21:13 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:13.911189    9096 kubelet.go:2996] "Container runtime sanity check failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Sep 17 01:21:14 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:14.037835    9096 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused\"" filter="<nil>"
	Sep 17 01:21:14 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:14.037889    9096 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Sep 17 01:21:14 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:14.037901    9096 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Sep 17 01:21:15 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:15.038556    9096 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused\"" filter="<nil>"
	Sep 17 01:21:15 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:15.038627    9096 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Sep 17 01:21:15 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:15.038643    9096 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Sep 17 01:21:15 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:15.211549    9096 kubelet.go:2451] "Skipping pod synchronization" err="container runtime is down"
	Sep 17 01:21:15 kubernetes-upgrade-790254 kubelet[9096]: W0917 01:21:15.300379    9096 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused"
	Sep 17 01:21:15 kubernetes-upgrade-790254 kubelet[9096]: W0917 01:21:15.802219    9096 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused"
	Sep 17 01:21:16 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:16.039617    9096 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused\"" filter="<nil>"
	Sep 17 01:21:16 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:16.039684    9096 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Sep 17 01:21:16 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:16.039700    9096 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Sep 17 01:21:17 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:17.039959    9096 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused\"" filter="<nil>"
	Sep 17 01:21:17 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:17.040030    9096 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Sep 17 01:21:17 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:17.040049    9096 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Sep 17 01:21:18 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:18.040764    9096 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused\"" filter="<nil>"
	Sep 17 01:21:18 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:18.040828    9096 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Sep 17 01:21:18 kubernetes-upgrade-790254 kubelet[9096]: E0917 01:21:18.040846    9096 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Sep 17 01:21:18 kubernetes-upgrade-790254 kubelet[9096]: W0917 01:21:18.058302    9096 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: connection refused"
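Note: the kubelet excerpt above is the systemd journal for the kubelet unit; to reproduce or extend it on the node:

    sudo journalctl -u kubelet -n 50 --no-pager
    sudo journalctl -u kubelet --no-pager | grep -c 'connection refused'   # how widespread the socket failure is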
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 01:21:17.946868  849214 logs.go:279] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:17Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:17.981846  849214 logs.go:279] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:17Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:18.017760  849214 logs.go:279] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:18Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:18.051992  849214 logs.go:279] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:18Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:18.086222  849214 logs.go:279] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:18Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:18.121630  849214 logs.go:279] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:18Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:18.155933  849214 logs.go:279] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:18Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:18.203662  849214 logs.go:279] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:18Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""

** /stderr **
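
The crictl calls above all fail the same way: nothing is answering on /var/run/crio/crio.sock, so the container runtime itself is down rather than any single container. A minimal diagnostic sketch, assuming the node were still reachable (the profile is deleted during cleanup below); the profile name is taken from this log, and the commands are standard minikube/systemd/crictl:

	# Shell into the node for this profile
	minikube ssh -p kubernetes-upgrade-790254
	# Check whether the CRI-O daemon is running and why it stopped
	sudo systemctl status crio --no-pager
	sudo journalctl -u crio --no-pager | tail -n 50
	# Probe the socket directly; a healthy runtime reports its API version here
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
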
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-790254 -n kubernetes-upgrade-790254
helpers_test.go:269: (dbg) Run:  kubectl --context kubernetes-upgrade-790254 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-fc2vm coredns-66bc5c9577-hspm2 kindnet-xnggj kube-proxy-grvw8 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context kubernetes-upgrade-790254 describe pod coredns-66bc5c9577-fc2vm coredns-66bc5c9577-hspm2 kindnet-xnggj kube-proxy-grvw8 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-790254 describe pod coredns-66bc5c9577-fc2vm coredns-66bc5c9577-hspm2 kindnet-xnggj kube-proxy-grvw8 storage-provisioner: exit status 1 (69.105048ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-fc2vm" not found
	Error from server (NotFound): pods "coredns-66bc5c9577-hspm2" not found
	Error from server (NotFound): pods "kindnet-xnggj" not found
	Error from server (NotFound): pods "kube-proxy-grvw8" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context kubernetes-upgrade-790254 describe pod coredns-66bc5c9577-fc2vm coredns-66bc5c9577-hspm2 kindnet-xnggj kube-proxy-grvw8 storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-790254" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-790254
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-790254: (4.470242196s)
--- FAIL: TestKubernetesUpgrade (446.90s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-377743 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-diff-port-377743 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: exit status 90 (46.507294057s)

-- stdout --
	* [default-k8s-diff-port-377743] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "default-k8s-diff-port-377743" primary control-plane node in "default-k8s-diff-port-377743" cluster
	* Pulling base image v0.0.48 ...
	
	

-- /stdout --
** stderr ** 
	I0917 01:20:18.832955  834635 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:20:18.833093  834635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:20:18.833108  834635 out.go:374] Setting ErrFile to fd 2...
	I0917 01:20:18.833113  834635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:20:18.833323  834635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 01:20:18.833813  834635 out.go:368] Setting JSON to false
	I0917 01:20:18.835094  834635 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14562,"bootTime":1758057457,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 01:20:18.835212  834635 start.go:140] virtualization: kvm guest
	I0917 01:20:18.837605  834635 out.go:179] * [default-k8s-diff-port-377743] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 01:20:18.838955  834635 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 01:20:18.839011  834635 notify.go:220] Checking for updates...
	I0917 01:20:18.841546  834635 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:20:18.843061  834635 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 01:20:18.844789  834635 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 01:20:18.846305  834635 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 01:20:18.848005  834635 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:20:18.849801  834635 config.go:182] Loaded profile config "default-k8s-diff-port-377743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:18.850577  834635 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 01:20:18.876862  834635 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 01:20:18.877079  834635 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:20:18.947860  834635 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 01:20:18.934688243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 01:20:18.947982  834635 docker.go:318] overlay module found
	I0917 01:20:18.950926  834635 out.go:179] * Using the docker driver based on existing profile
	I0917 01:20:18.952227  834635 start.go:304] selected driver: docker
	I0917 01:20:18.952247  834635 start.go:918] validating driver "docker" against &{Name:default-k8s-diff-port-377743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-377743 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:20:18.952358  834635 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:20:18.953061  834635 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:20:19.016232  834635 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 01:20:19.005755299 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 01:20:19.016667  834635 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 01:20:19.016705  834635 cni.go:84] Creating CNI manager for ""
	I0917 01:20:19.016789  834635 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 01:20:19.016858  834635 start.go:348] cluster config:
	{Name:default-k8s-diff-port-377743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-377743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:20:19.022838  834635 out.go:179] * Starting "default-k8s-diff-port-377743" primary control-plane node in "default-k8s-diff-port-377743" cluster
	I0917 01:20:19.023998  834635 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 01:20:19.025195  834635 out.go:179] * Pulling base image v0.0.48 ...
	I0917 01:20:19.026318  834635 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:19.026363  834635 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 01:20:19.026374  834635 cache.go:58] Caching tarball of preloaded images
	I0917 01:20:19.026446  834635 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 01:20:19.026546  834635 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 01:20:19.026563  834635 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 01:20:19.026673  834635 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/default-k8s-diff-port-377743/config.json ...
	I0917 01:20:19.049721  834635 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 01:20:19.049752  834635 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 01:20:19.049770  834635 cache.go:232] Successfully downloaded all kic artifacts
	I0917 01:20:19.049802  834635 start.go:360] acquireMachinesLock for default-k8s-diff-port-377743: {Name:mka74d1ff2632440c50e7900c03845b017dff605 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:20:19.049880  834635 start.go:364] duration metric: took 43.956µs to acquireMachinesLock for "default-k8s-diff-port-377743"
	I0917 01:20:19.049905  834635 start.go:96] Skipping create...Using existing machine configuration
	I0917 01:20:19.049912  834635 fix.go:54] fixHost starting: 
	I0917 01:20:19.050126  834635 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-377743 --format={{.State.Status}}
	I0917 01:20:19.069777  834635 fix.go:112] recreateIfNeeded on default-k8s-diff-port-377743: state=Stopped err=<nil>
	W0917 01:20:19.069815  834635 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 01:20:19.071916  834635 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-377743" ...
	I0917 01:20:19.072011  834635 cli_runner.go:164] Run: docker start default-k8s-diff-port-377743
	I0917 01:20:19.372022  834635 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-377743 --format={{.State.Status}}
	I0917 01:20:19.409427  834635 kic.go:430] container "default-k8s-diff-port-377743" state is running.
	I0917 01:20:19.410384  834635 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-377743
	I0917 01:20:19.438663  834635 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/default-k8s-diff-port-377743/config.json ...
	I0917 01:20:19.438980  834635 machine.go:93] provisionDockerMachine start ...
	I0917 01:20:19.439069  834635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-377743
	I0917 01:20:19.466821  834635 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:19.467733  834635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I0917 01:20:19.467753  834635 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 01:20:19.468627  834635 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0917 01:20:22.610726  834635 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-377743
	
	I0917 01:20:22.610773  834635 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-377743"
	I0917 01:20:22.610877  834635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-377743
	I0917 01:20:22.633461  834635 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:22.633783  834635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I0917 01:20:22.633811  834635 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-377743 && echo "default-k8s-diff-port-377743" | sudo tee /etc/hostname
	I0917 01:20:22.786216  834635 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-377743
	
	I0917 01:20:22.786315  834635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-377743
	I0917 01:20:22.804534  834635 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:22.804781  834635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I0917 01:20:22.804807  834635 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-377743' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-377743/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-377743' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 01:20:22.942536  834635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 01:20:22.942589  834635 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 01:20:22.942623  834635 ubuntu.go:190] setting up certificates
	I0917 01:20:22.942636  834635 provision.go:84] configureAuth start
	I0917 01:20:22.942697  834635 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-377743
	I0917 01:20:22.961673  834635 provision.go:143] copyHostCerts
	I0917 01:20:22.961755  834635 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 01:20:22.961775  834635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 01:20:22.961841  834635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 01:20:22.961936  834635 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 01:20:22.961946  834635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 01:20:22.961973  834635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 01:20:22.962041  834635 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 01:20:22.962049  834635 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 01:20:22.962080  834635 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 01:20:22.962142  834635 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-377743 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-377743 localhost minikube]
	I0917 01:20:23.043679  834635 provision.go:177] copyRemoteCerts
	I0917 01:20:23.043740  834635 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 01:20:23.043783  834635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-377743
	I0917 01:20:23.062985  834635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/default-k8s-diff-port-377743/id_rsa Username:docker}
	I0917 01:20:23.162299  834635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 01:20:23.192060  834635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0917 01:20:23.220249  834635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 01:20:23.247851  834635 provision.go:87] duration metric: took 305.193288ms to configureAuth
	I0917 01:20:23.247884  834635 ubuntu.go:206] setting minikube options for container-runtime
	I0917 01:20:23.248102  834635 config.go:182] Loaded profile config "default-k8s-diff-port-377743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:23.248258  834635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-377743
	I0917 01:20:23.266986  834635 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:23.267218  834635 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I0917 01:20:23.267240  834635 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 01:20:23.574615  834635 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 01:20:23.574645  834635 machine.go:96] duration metric: took 4.135644867s to provisionDockerMachine
	I0917 01:20:23.574660  834635 start.go:293] postStartSetup for "default-k8s-diff-port-377743" (driver="docker")
	I0917 01:20:23.574674  834635 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 01:20:23.574736  834635 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 01:20:23.574788  834635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-377743
	I0917 01:20:23.597427  834635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/default-k8s-diff-port-377743/id_rsa Username:docker}
	I0917 01:20:23.697218  834635 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 01:20:23.700834  834635 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 01:20:23.700867  834635 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 01:20:23.700876  834635 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 01:20:23.700884  834635 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 01:20:23.700894  834635 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 01:20:23.700956  834635 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 01:20:23.701045  834635 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 01:20:23.701142  834635 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 01:20:23.711179  834635 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 01:20:23.738585  834635 start.go:296] duration metric: took 163.898665ms for postStartSetup
	I0917 01:20:23.738671  834635 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 01:20:23.738713  834635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-377743
	I0917 01:20:23.757207  834635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/default-k8s-diff-port-377743/id_rsa Username:docker}
	I0917 01:20:23.851883  834635 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 01:20:23.856567  834635 fix.go:56] duration metric: took 4.80664585s for fixHost
	I0917 01:20:23.856594  834635 start.go:83] releasing machines lock for "default-k8s-diff-port-377743", held for 4.806699507s
	I0917 01:20:23.856660  834635 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-377743
	I0917 01:20:23.876731  834635 ssh_runner.go:195] Run: cat /version.json
	I0917 01:20:23.876790  834635 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 01:20:23.876804  834635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-377743
	I0917 01:20:23.876853  834635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-377743
	I0917 01:20:23.896476  834635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/default-k8s-diff-port-377743/id_rsa Username:docker}
	I0917 01:20:23.896834  834635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/default-k8s-diff-port-377743/id_rsa Username:docker}
	I0917 01:20:23.989707  834635 ssh_runner.go:195] Run: systemctl --version
	I0917 01:20:24.067801  834635 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 01:20:24.211845  834635 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 01:20:24.217325  834635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:20:24.228539  834635 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 01:20:24.228617  834635 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:20:24.238948  834635 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 01:20:24.238974  834635 start.go:495] detecting cgroup driver to use...
	I0917 01:20:24.239013  834635 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 01:20:24.239080  834635 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 01:20:24.253955  834635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 01:20:24.267340  834635 docker.go:218] disabling cri-docker service (if available) ...
	I0917 01:20:24.267418  834635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 01:20:24.282399  834635 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 01:20:24.295182  834635 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 01:20:24.366101  834635 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 01:20:24.439626  834635 docker.go:234] disabling docker service ...
	I0917 01:20:24.439701  834635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 01:20:24.453571  834635 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 01:20:24.466746  834635 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 01:20:24.532992  834635 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 01:20:24.602865  834635 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 01:20:24.615687  834635 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 01:20:24.634457  834635 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 01:20:24.634529  834635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:24.646027  834635 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 01:20:24.646101  834635 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:24.657408  834635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:24.669633  834635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:24.683280  834635 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 01:20:24.694345  834635 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:24.705453  834635 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:24.716783  834635 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:24.728652  834635 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 01:20:24.739128  834635 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 01:20:24.748801  834635 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:20:24.835521  834635 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 01:20:24.938937  834635 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 01:20:24.939022  834635 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 01:20:24.943874  834635 start.go:563] Will wait 60s for crictl version
	I0917 01:20:24.943931  834635 ssh_runner.go:195] Run: which crictl
	I0917 01:20:24.948407  834635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:20:24.985728  834635 retry.go:31] will retry after 6.54678168s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:20:24Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	I0917 01:20:31.534529  834635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:20:31.573890  834635 retry.go:31] will retry after 21.843859683s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:20:31Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	I0917 01:20:53.421561  834635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:20:53.456108  834635 retry.go:31] will retry after 11.768849883s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:20:53Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	I0917 01:21:05.225533  834635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:21:05.270428  834635 out.go:203] 
	W0917 01:21:05.271803  834635 out.go:285] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:05Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	
	W0917 01:21:05.271827  834635 out.go:285] * 
	W0917 01:21:05.273977  834635 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 01:21:05.275509  834635 out.go:203] 

** /stderr **
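
The start exits with status 90 (RUNTIME_ENABLE) after four crictl probes spread over roughly 40 seconds, the same socket-refused failure as in TestKubernetesUpgrade, but here it appears immediately after minikube's own `sudo systemctl restart crio`. A follow-up sketch using only commands this log and its advice box already name:

	# Capture the full log bundle the advice box asks for
	out/minikube-linux-amd64 -p default-k8s-diff-port-377743 logs --file=logs.txt
	# Re-run the identical post-stop start to see whether the failure reproduces
	out/minikube-linux-amd64 start -p default-k8s-diff-port-377743 --memory=3072 \
	  --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker \
	  --container-runtime=crio --kubernetes-version=v1.34.0
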
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-diff-port-377743 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0": exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-377743
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-377743:

-- stdout --
	[
	    {
	        "Id": "ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc",
	        "Created": "2025-09-17T01:18:46.928961651Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 834842,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T01:20:19.100007065Z",
	            "FinishedAt": "2025-09-17T01:20:18.156352163Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/hosts",
	        "LogPath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc-json.log",
	        "Name": "/default-k8s-diff-port-377743",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-377743:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-377743",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc",
	                "LowerDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-377743",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-377743/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-377743",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-377743",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-377743",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "501c8109c3bc8c00897c7b54c7d2675ba4a3bb996e4f4f197def146bb8ff190a",
	            "SandboxKey": "/var/run/docker/netns/501c8109c3bc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-377743": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:0c:94:b6:e5:07",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2391a23950fb5471e73c0e959464ddf40474359ad3a94730b27d02f587b2a08a",
	                    "EndpointID": "db62485899e89b194074d4c36c3a42a7db3e7cbeba5ca889e1cc809ec8289fa5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-377743",
	                        "ce5cf21d301e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
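
The inspect output shows the container did restart and is Running, with each exposed port bound to an ephemeral host port on 127.0.0.1 (22/tcp on 33473, 8444/tcp on 33476), so the failure sits inside the container rather than in the Docker wiring. A small sketch of reading those mappings directly; the template mirrors the one minikube itself runs earlier in this log:

	# Host side of the API server port (8444/tcp inside the container)
	docker port default-k8s-diff-port-377743 8444/tcp
	# Same lookup via an inspect template, handy in scripts
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-377743
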
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743: exit status 6 (301.210107ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0917 01:21:05.607906  844054 status.go:458] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-377743" does not appear in /home/jenkins/minikube-integration/21550-517646/kubeconfig

** /stderr **
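
The status probe fails for a second, independent reason: the profile's endpoint is missing from the kubeconfig, so kubectl points at a stale context. A minimal sketch of the fix the warning itself recommends:

	# Re-point kubeconfig at this profile, as the warning suggests
	out/minikube-linux-amd64 update-context -p default-k8s-diff-port-377743
	# Confirm the context entry now exists
	kubectl config get-contexts default-k8s-diff-port-377743
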
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-377743 logs -n 25
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-333616 sudo systemctl status kubelet --all --full --no-pager                                                                     │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl cat kubelet --no-pager                                                                                     │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo journalctl -xeu kubelet --all --full --no-pager                                                                      │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/kubernetes/kubelet.conf                                                                                     │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /var/lib/kubelet/config.yaml                                                                                     │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl status docker --all --full --no-pager                                                                      │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl cat docker --no-pager                                                                                      │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/docker/daemon.json                                                                                          │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo docker system info                                                                                                   │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl status cri-docker --all --full --no-pager                                                                  │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl cat cri-docker --no-pager                                                                                  │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                             │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                       │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cri-dockerd --version                                                                                                │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl status containerd --all --full --no-pager                                                                  │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl cat containerd --no-pager                                                                                  │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /lib/systemd/system/containerd.service                                                                           │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/containerd/config.toml                                                                                      │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo containerd config dump                                                                                               │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl status crio --all --full --no-pager                                                                        │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl cat crio --no-pager                                                                                        │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                              │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo crio config                                                                                                          │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ delete  │ -p auto-333616                                                                                                                           │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ start   │ -p kindnet-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio │ kindnet-333616 │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
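
Each audit row records one out/minikube-linux-amd64 invocation; an empty END TIME appears to mark a command that exited non-zero or is still running (expected here: auto-333616 runs crio, so the docker, cri-docker, and containerd probes fail, and the final kindnet start is still in flight). Any of the runtime checks can be replayed against a live profile, e.g. (assuming kindnet-333616 finishes starting):

    # same form as the audit rows: minikube ssh -p <profile> <remote command>
    minikube ssh -p kindnet-333616 sudo systemctl status crio --no-pager
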
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 01:20:46
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 01:20:46.991253  841202 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:20:46.991355  841202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:20:46.991363  841202 out.go:374] Setting ErrFile to fd 2...
	I0917 01:20:46.991367  841202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:20:46.991948  841202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 01:20:46.993103  841202 out.go:368] Setting JSON to false
	I0917 01:20:46.994427  841202 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14590,"bootTime":1758057457,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 01:20:46.994531  841202 start.go:140] virtualization: kvm guest
	I0917 01:20:46.996762  841202 out.go:179] * [kindnet-333616] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 01:20:46.998033  841202 notify.go:220] Checking for updates...
	I0917 01:20:46.998040  841202 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 01:20:46.999333  841202 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:20:47.000646  841202 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 01:20:47.002223  841202 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 01:20:47.003668  841202 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 01:20:47.005002  841202 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:20:47.006954  841202 config.go:182] Loaded profile config "default-k8s-diff-port-377743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007104  841202 config.go:182] Loaded profile config "embed-certs-748988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007208  841202 config.go:182] Loaded profile config "kubernetes-upgrade-790254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007331  841202 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 01:20:47.034761  841202 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 01:20:47.034876  841202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:20:47.096866  841202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 01:20:47.086442486 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 01:20:47.097016  841202 docker.go:318] overlay module found
	I0917 01:20:47.099127  841202 out.go:179] * Using the docker driver based on user configuration
	I0917 01:20:47.100598  841202 start.go:304] selected driver: docker
	I0917 01:20:47.100620  841202 start.go:918] validating driver "docker" against <nil>
	I0917 01:20:47.100634  841202 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:20:47.101213  841202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:20:47.157653  841202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 01:20:47.147017932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 01:20:47.157843  841202 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0917 01:20:47.158047  841202 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 01:20:47.159808  841202 out.go:179] * Using Docker driver with root privileges
	I0917 01:20:47.161165  841202 cni.go:84] Creating CNI manager for "kindnet"
	I0917 01:20:47.161185  841202 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 01:20:47.161271  841202 start.go:348] cluster config:
	{Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Netwo
rkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInte
rval:1m0s}
	I0917 01:20:47.162725  841202 out.go:179] * Starting "kindnet-333616" primary control-plane node in "kindnet-333616" cluster
	I0917 01:20:47.164093  841202 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 01:20:47.165424  841202 out.go:179] * Pulling base image v0.0.48 ...
	I0917 01:20:47.166669  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:47.166713  841202 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 01:20:47.166725  841202 cache.go:58] Caching tarball of preloaded images
	I0917 01:20:47.166780  841202 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 01:20:47.166823  841202 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 01:20:47.166834  841202 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 01:20:47.166922  841202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json ...
	I0917 01:20:47.166937  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json: {Name:mkd38d1752014f4bab9dae52a7872fb8a5cc71fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:47.192914  841202 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 01:20:47.192938  841202 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 01:20:47.192970  841202 cache.go:232] Successfully downloaded all kic artifacts
	I0917 01:20:47.193004  841202 start.go:360] acquireMachinesLock for kindnet-333616: {Name:mkc24d8ed730ab1614498d5beb0270c845773667 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:20:47.193133  841202 start.go:364] duration metric: took 104.991µs to acquireMachinesLock for "kindnet-333616"
	I0917 01:20:47.193181  841202 start.go:93] Provisioning new machine with config: &{Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 01:20:47.193276  841202 start.go:125] createHost starting for "" (driver="docker")
	W0917 01:20:45.672555  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:47.672815  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	I0917 01:20:47.195051  841202 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 01:20:47.195285  841202 start.go:159] libmachine.API.Create for "kindnet-333616" (driver="docker")
	I0917 01:20:47.195320  841202 client.go:168] LocalClient.Create starting
	I0917 01:20:47.195405  841202 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 01:20:47.195446  841202 main.go:141] libmachine: Decoding PEM data...
	I0917 01:20:47.195462  841202 main.go:141] libmachine: Parsing certificate...
	I0917 01:20:47.195517  841202 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 01:20:47.195536  841202 main.go:141] libmachine: Decoding PEM data...
	I0917 01:20:47.195549  841202 main.go:141] libmachine: Parsing certificate...
	I0917 01:20:47.195889  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 01:20:47.213519  841202 cli_runner.go:211] docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 01:20:47.213608  841202 network_create.go:284] running [docker network inspect kindnet-333616] to gather additional debugging logs...
	I0917 01:20:47.213640  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616
	W0917 01:20:47.231055  841202 cli_runner.go:211] docker network inspect kindnet-333616 returned with exit code 1
	I0917 01:20:47.231092  841202 network_create.go:287] error running [docker network inspect kindnet-333616]: docker network inspect kindnet-333616: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-333616 not found
	I0917 01:20:47.231127  841202 network_create.go:289] output of [docker network inspect kindnet-333616]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-333616 not found
	
	** /stderr **
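
This non-zero exit is the probe, not a fault: network_create inspects first and only creates the network when the inspect fails. The pattern reduces to the following (a simplified sketch, using the subnet the scan below settles on):

    # assumes 192.168.103.0/24 is still free, as the subnet scan below finds
    docker network inspect kindnet-333616 >/dev/null 2>&1 || \
      docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 kindnet-333616
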
	I0917 01:20:47.231231  841202 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 01:20:47.249036  841202 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c0c35d0ccc41 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:29:30:69:13:a2} reservation:<nil>}
	I0917 01:20:47.249865  841202 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f7514a86599 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7e:c0:7e:cc:23:dc} reservation:<nil>}
	I0917 01:20:47.250378  841202 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0cef36e94e8e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:db:fd:7a:23:9f} reservation:<nil>}
	I0917 01:20:47.250966  841202 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8b9dd3e2b39a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:42:6a:d6:f0:80:2b} reservation:<nil>}
	I0917 01:20:47.251698  841202 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-2391a23950fb IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:6b:a9:b6:cd:fd} reservation:<nil>}
	I0917 01:20:47.252201  841202 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-2f0a55cba78d IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:3e:b8:6b:32:ae:3d} reservation:<nil>}
	I0917 01:20:47.253017  841202 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d90400}
	I0917 01:20:47.253041  841202 network_create.go:124] attempt to create docker network kindnet-333616 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0917 01:20:47.253107  841202 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-333616 kindnet-333616
	I0917 01:20:47.313030  841202 network_create.go:108] docker network kindnet-333616 192.168.103.0/24 created
	I0917 01:20:47.313138  841202 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-333616" container
	I0917 01:20:47.313224  841202 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 01:20:47.331726  841202 cli_runner.go:164] Run: docker volume create kindnet-333616 --label name.minikube.sigs.k8s.io=kindnet-333616 --label created_by.minikube.sigs.k8s.io=true
	I0917 01:20:47.350777  841202 oci.go:103] Successfully created a docker volume kindnet-333616
	I0917 01:20:47.350848  841202 cli_runner.go:164] Run: docker run --rm --name kindnet-333616-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-333616 --entrypoint /usr/bin/test -v kindnet-333616:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 01:20:47.744926  841202 oci.go:107] Successfully prepared a docker volume kindnet-333616
	I0917 01:20:47.744972  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:47.744994  841202 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 01:20:47.745059  841202 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-333616:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 01:20:53.421561  834635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:20:53.456108  834635 retry.go:31] will retry after 11.768849883s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:20:53Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	W0917 01:20:50.174804  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:52.673786  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	I0917 01:20:52.004993  841202 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-333616:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.25985666s)
	I0917 01:20:52.005028  841202 kic.go:203] duration metric: took 4.26003048s to extract preloaded images to volume ...
	W0917 01:20:52.005133  841202 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 01:20:52.005164  841202 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 01:20:52.005202  841202 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 01:20:52.066749  841202 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-333616 --name kindnet-333616 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-333616 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-333616 --network kindnet-333616 --ip 192.168.103.2 --volume kindnet-333616:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 01:20:52.362306  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Running}}
	I0917 01:20:52.383555  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.406449  841202 cli_runner.go:164] Run: docker exec kindnet-333616 stat /var/lib/dpkg/alternatives/iptables
	I0917 01:20:52.459697  841202 oci.go:144] the created container "kindnet-333616" has a running status.
	I0917 01:20:52.459737  841202 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa...
	I0917 01:20:52.716503  841202 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 01:20:52.742117  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.761330  841202 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 01:20:52.761355  841202 kic_runner.go:114] Args: [docker exec --privileged kindnet-333616 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 01:20:52.809335  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.831209  841202 machine.go:93] provisionDockerMachine start ...
	I0917 01:20:52.831331  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:52.852889  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:52.853249  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:52.853269  841202 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 01:20:52.992938  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-333616
	
	I0917 01:20:52.992969  841202 ubuntu.go:182] provisioning hostname "kindnet-333616"
	I0917 01:20:52.993051  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.013532  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:53.013764  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:53.013778  841202 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-333616 && echo "kindnet-333616" | sudo tee /etc/hostname
	I0917 01:20:53.166881  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-333616
	
	I0917 01:20:53.166973  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.187352  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:53.187631  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:53.187658  841202 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-333616' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-333616/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-333616' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 01:20:53.332338  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 01:20:53.332408  841202 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 01:20:53.332452  841202 ubuntu.go:190] setting up certificates
	I0917 01:20:53.332472  841202 provision.go:84] configureAuth start
	I0917 01:20:53.332570  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:53.352359  841202 provision.go:143] copyHostCerts
	I0917 01:20:53.352466  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 01:20:53.352481  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 01:20:53.352553  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 01:20:53.352652  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 01:20:53.352661  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 01:20:53.352689  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 01:20:53.352759  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 01:20:53.352766  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 01:20:53.352789  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 01:20:53.352841  841202 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.kindnet-333616 san=[127.0.0.1 192.168.103.2 kindnet-333616 localhost minikube]
	I0917 01:20:53.973038  841202 provision.go:177] copyRemoteCerts
	I0917 01:20:53.973143  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 01:20:53.973182  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.991696  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.091426  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0917 01:20:54.121737  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 01:20:54.150762  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 01:20:54.179160  841202 provision.go:87] duration metric: took 846.669603ms to configureAuth
	I0917 01:20:54.179187  841202 ubuntu.go:206] setting minikube options for container-runtime
	I0917 01:20:54.179345  841202 config.go:182] Loaded profile config "kindnet-333616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:54.179463  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.198684  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:54.198909  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:54.198925  841202 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 01:20:54.444483  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 01:20:54.444511  841202 machine.go:96] duration metric: took 1.613270939s to provisionDockerMachine
	I0917 01:20:54.444522  841202 client.go:171] duration metric: took 7.249193748s to LocalClient.Create
	I0917 01:20:54.444542  841202 start.go:167] duration metric: took 7.249257601s to libmachine.API.Create "kindnet-333616"
	I0917 01:20:54.444554  841202 start.go:293] postStartSetup for "kindnet-333616" (driver="docker")
	I0917 01:20:54.444572  841202 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 01:20:54.444641  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 01:20:54.444690  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.463166  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.563892  841202 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 01:20:54.567735  841202 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 01:20:54.567765  841202 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 01:20:54.567772  841202 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 01:20:54.567782  841202 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 01:20:54.567795  841202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 01:20:54.567855  841202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 01:20:54.567966  841202 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 01:20:54.568108  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 01:20:54.577885  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 01:20:54.606690  841202 start.go:296] duration metric: took 162.114963ms for postStartSetup
	I0917 01:20:54.607107  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:54.625322  841202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json ...
	I0917 01:20:54.625758  841202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 01:20:54.625821  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.643332  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.737805  841202 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 01:20:54.742465  841202 start.go:128] duration metric: took 7.549168533s to createHost
	I0917 01:20:54.742494  841202 start.go:83] releasing machines lock for "kindnet-333616", held for 7.549346209s
	I0917 01:20:54.742570  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:54.759991  841202 ssh_runner.go:195] Run: cat /version.json
	I0917 01:20:54.760051  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.760083  841202 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 01:20:54.760154  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.778915  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.779306  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.952563  841202 ssh_runner.go:195] Run: systemctl --version
	I0917 01:20:54.957470  841202 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 01:20:55.101309  841202 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 01:20:55.106493  841202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:20:55.131742  841202 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 01:20:55.131831  841202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:20:55.164272  841202 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 01:20:55.164303  841202 start.go:495] detecting cgroup driver to use...
	I0917 01:20:55.164352  841202 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 01:20:55.164430  841202 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 01:20:55.182732  841202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 01:20:55.194856  841202 docker.go:218] disabling cri-docker service (if available) ...
	I0917 01:20:55.194918  841202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 01:20:55.209368  841202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 01:20:55.224908  841202 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 01:20:55.294219  841202 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 01:20:55.366744  841202 docker.go:234] disabling docker service ...
	I0917 01:20:55.366805  841202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 01:20:55.386004  841202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 01:20:55.398281  841202 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 01:20:55.471097  841202 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 01:20:55.620605  841202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 01:20:55.632936  841202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 01:20:55.650751  841202 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 01:20:55.650813  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.665355  841202 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 01:20:55.665449  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.677774  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.688724  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.700141  841202 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 01:20:55.711135  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.722974  841202 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.741236  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.752869  841202 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 01:20:55.762991  841202 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 01:20:55.772774  841202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:20:55.842833  841202 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 01:20:55.939370  841202 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 01:20:55.939456  841202 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 01:20:55.943491  841202 start.go:563] Will wait 60s for crictl version
	I0917 01:20:55.943562  841202 ssh_runner.go:195] Run: which crictl
	I0917 01:20:55.947384  841202 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:20:55.984137  841202 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
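
Here the probe returns well inside the 60s budget, confirming crio restarted cleanly with the rewritten config. If the sed-based rewrites above need verifying, the drop-in can be read back from inside the node (a sketch, assuming the profile is up):

    # pause_image and cgroup_manager were both rewritten into this drop-in
    minikube ssh -p kindnet-333616 -- sudo grep pause_image /etc/crio/crio.conf.d/02-crio.conf
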
	I0917 01:20:55.984206  841202 ssh_runner.go:195] Run: crio --version
	I0917 01:20:56.022652  841202 ssh_runner.go:195] Run: crio --version
	I0917 01:20:56.062561  841202 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 01:20:56.063985  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 01:20:56.081880  841202 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0917 01:20:56.086073  841202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 01:20:56.098482  841202 kubeadm.go:875] updating cluster {Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPat
h: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 01:20:56.098622  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:56.098685  841202 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:20:56.169870  841202 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 01:20:56.169898  841202 crio.go:433] Images already preloaded, skipping extraction
	I0917 01:20:56.169953  841202 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:20:56.206753  841202 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 01:20:56.206784  841202 cache_images.go:85] Images are preloaded, skipping loading
	I0917 01:20:56.206794  841202 kubeadm.go:926] updating node { 192.168.103.2 8443 v1.34.0 crio true true} ...
	I0917 01:20:56.206913  841202 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-333616 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0917 01:20:56.207001  841202 ssh_runner.go:195] Run: crio config
	I0917 01:20:56.253538  841202 cni.go:84] Creating CNI manager for "kindnet"
	I0917 01:20:56.253567  841202 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 01:20:56.253590  841202 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-333616 NodeName:kindnet-333616 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 01:20:56.253716  841202 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-333616"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 01:20:56.253775  841202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 01:20:56.264146  841202 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 01:20:56.264224  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 01:20:56.274749  841202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0917 01:20:56.293906  841202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 01:20:56.316487  841202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
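	A triage aside: a rendered config like the one scp'd above can be checked on the node before the real init runs. A minimal sketch, using the path from the log and kubeadm's standard --dry-run flag (nothing minikube-specific):
	
	# Parse and exercise the config without mutating the node
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run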
	I0917 01:20:56.336550  841202 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0917 01:20:56.340325  841202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
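	Note the shape of the command above: the new /etc/hosts content is assembled in a temp file and then cp'd over the original rather than renamed into place. Inside a container /etc/hosts is a bind mount, so a rename would fail across the mount boundary; cp rewrites the existing inode. A minimal sketch of the same pattern with a hypothetical entry (myhost and 192.0.2.10 are placeholders, not from this run):
	
	# Drop any stale line for the name, append the fresh mapping, then overwrite in place
	{ grep -v $'\tmyhost$' /etc/hosts; echo "192.0.2.10	myhost"; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts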
	I0917 01:20:56.352936  841202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:20:56.418882  841202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 01:20:56.445037  841202 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616 for IP: 192.168.103.2
	I0917 01:20:56.445069  841202 certs.go:194] generating shared ca certs ...
	I0917 01:20:56.445096  841202 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.445265  841202 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 01:20:56.445328  841202 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 01:20:56.445342  841202 certs.go:256] generating profile certs ...
	I0917 01:20:56.445433  841202 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key
	I0917 01:20:56.445452  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt with IP's: []
	I0917 01:20:56.575658  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt ...
	I0917 01:20:56.575692  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt: {Name:mke4c01e2ad680ec95da34129972695bc352dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.575918  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key ...
	I0917 01:20:56.575935  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key: {Name:mk196e199bf8e509067e257fa5978cc4017a9515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.576063  841202 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883
	I0917 01:20:56.576083  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0917 01:20:56.891743  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 ...
	I0917 01:20:56.891776  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883: {Name:mk080638a3e062c43555f3e1bbede660cca9c8ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.891955  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883 ...
	I0917 01:20:56.891969  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883: {Name:mkbe71ad29db0d31be773639ab90fdd03d84b089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.892043  841202 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt
	I0917 01:20:56.892145  841202 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key
	I0917 01:20:56.892212  841202 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key
	I0917 01:20:56.892228  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt with IP's: []
	W0917 01:20:55.172587  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:57.173997  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:59.673374  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	I0917 01:20:57.205489  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt ...
	I0917 01:20:57.205524  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt: {Name:mkf6b5ecd44d0faf20e6e53acc7eeebe333eca17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:57.205728  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key ...
	I0917 01:20:57.205746  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key: {Name:mk2b3f753e527ada6b46c8fd672f3b210e243668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:57.205983  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 01:20:57.206033  841202 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 01:20:57.206049  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 01:20:57.206079  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 01:20:57.206110  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 01:20:57.206143  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 01:20:57.206196  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 01:20:57.206849  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 01:20:57.236316  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 01:20:57.264039  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 01:20:57.290903  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 01:20:57.316649  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 01:20:57.343336  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 01:20:57.369426  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 01:20:57.395757  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 01:20:57.422129  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 01:20:57.452169  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 01:20:57.479060  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 01:20:57.505045  841202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 01:20:57.524210  841202 ssh_runner.go:195] Run: openssl version
	I0917 01:20:57.530236  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 01:20:57.540421  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.544062  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.544118  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.551188  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 01:20:57.561515  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 01:20:57.572283  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.576261  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.576323  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.583692  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 01:20:57.593924  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 01:20:57.604001  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.608154  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.608211  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.615475  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
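	The b5213941.0-style link names created above are OpenSSL subject-hash names: the hash printed by `openssl x509 -hash` is exactly the filename the TLS stack looks up under /etc/ssl/certs. A minimal sketch of the same two steps the log performs:
	
	# Derive the subject hash, then publish the cert under /etc/ssl/certs/<hash>.0
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"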
	I0917 01:20:57.625656  841202 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 01:20:57.629541  841202 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 01:20:57.629606  841202 kubeadm.go:392] StartCluster: {Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:20:57.629685  841202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 01:20:57.629748  841202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 01:20:57.668306  841202 cri.go:89] found id: ""
	I0917 01:20:57.668384  841202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 01:20:57.679315  841202 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 01:20:57.689592  841202 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 01:20:57.689666  841202 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 01:20:57.699255  841202 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 01:20:57.699272  841202 kubeadm.go:157] found existing configuration files:
	
	I0917 01:20:57.699327  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 01:20:57.708879  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 01:20:57.708950  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 01:20:57.718406  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 01:20:57.728172  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 01:20:57.728251  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 01:20:57.737991  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 01:20:57.748427  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 01:20:57.748487  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 01:20:57.757822  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 01:20:57.767640  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 01:20:57.767708  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 01:20:57.776934  841202 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 01:20:57.849477  841202 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0917 01:20:57.909176  841202 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
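	Both preflight warnings above are routine in this environment. For reference, a sketch of the usual checks, assuming you wanted to silence them (the second command only verifies that a kernel config is readable; the warning itself is harmless under the docker driver):
	
	# Remediation suggested by the Service-Kubelet warning
	sudo systemctl enable kubelet.service
	# SystemVerification looks for a readable kernel config in the usual places
	ls /boot/config-$(uname -r) /proc/config.gz 2>/dev/null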
	I0917 01:21:01.172820  832418 pod_ready.go:94] pod "coredns-66bc5c9577-qqxrk" is "Ready"
	I0917 01:21:01.172851  832418 pod_ready.go:86] duration metric: took 38.505527826s for pod "coredns-66bc5c9577-qqxrk" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.175617  832418 pod_ready.go:83] waiting for pod "etcd-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.179752  832418 pod_ready.go:94] pod "etcd-embed-certs-748988" is "Ready"
	I0917 01:21:01.179779  832418 pod_ready.go:86] duration metric: took 4.135657ms for pod "etcd-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.182426  832418 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.186899  832418 pod_ready.go:94] pod "kube-apiserver-embed-certs-748988" is "Ready"
	I0917 01:21:01.186928  832418 pod_ready.go:86] duration metric: took 4.474792ms for pod "kube-apiserver-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.189100  832418 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.371319  832418 pod_ready.go:94] pod "kube-controller-manager-embed-certs-748988" is "Ready"
	I0917 01:21:01.371352  832418 pod_ready.go:86] duration metric: took 182.22498ms for pod "kube-controller-manager-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.570958  832418 pod_ready.go:83] waiting for pod "kube-proxy-2bkdq" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.970376  832418 pod_ready.go:94] pod "kube-proxy-2bkdq" is "Ready"
	I0917 01:21:01.970432  832418 pod_ready.go:86] duration metric: took 399.444446ms for pod "kube-proxy-2bkdq" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.171077  832418 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.570435  832418 pod_ready.go:94] pod "kube-scheduler-embed-certs-748988" is "Ready"
	I0917 01:21:02.570467  832418 pod_ready.go:86] duration metric: took 399.360883ms for pod "kube-scheduler-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.570484  832418 pod_ready.go:40] duration metric: took 39.908444834s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 01:21:02.617522  832418 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 01:21:02.619899  832418 out.go:179] * Done! kubectl is now configured to use "embed-certs-748988" cluster and "default" namespace by default
	I0917 01:21:05.225533  834635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:21:05.270428  834635 out.go:203] 
	W0917 01:21:05.271803  834635 out.go:285] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:05Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	
	W0917 01:21:05.271827  834635 out.go:285] * 
	W0917 01:21:05.273977  834635 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 01:21:05.275509  834635 out.go:203] 
	
	
	==> CRI-O <==
	Sep 17 01:20:23 default-k8s-diff-port-377743 systemd[1]: crio.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 01:20:23 default-k8s-diff-port-377743 systemd[1]: crio.service: Failed with result 'exit-code'.
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: Starting Container Runtime Interface for OCI (CRI-O)...
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900219900Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900375290Z" level=info msg="Node configuration value for hugetlb cgroup is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900412189Z" level=info msg="Node configuration value for pid cgroup is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900479004Z" level=info msg="Node configuration value for memoryswap cgroup is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900490617Z" level=info msg="Node configuration value for cgroup v2 is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.906797224Z" level=info msg="Node configuration value for systemd CollectMode is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.913464400Z" level=info msg="Node configuration value for systemd AllowedCPUs is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.913750835Z" level=info msg="[graphdriver] using prior storage driver: overlay"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.914768261Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.917639624Z" level=info msg="Conmon does support the --sync option"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.917673061Z" level=info msg="Conmon does support the --log-global-size-max option"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919571823Z" level=info msg="Using seccomp default profile when unspecified: true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919593931Z" level=info msg="No seccomp profile specified, using the internal default"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919603651Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919611839Z" level=info msg="No blockio config file specified, blockio not configured"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919618928Z" level=info msg="RDT not available in the host system"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.924637958Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.924675992Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.937133863Z" level=fatal msg="too many open files"
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: crio.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: crio.service: Failed with result 'exit-code'.
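	The fatal `too many open files` immediately after CRI-O starts is usually an exhausted inotify or file-descriptor limit on a heavily shared CI host rather than a CRI-O bug. A sketch of the usual checks; the sysctl names are standard Linux, and the raised values are illustrative, not what this host was running with:
	
	# Limits that most often trip CRI-O and kubelet on shared runners
	sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
	ulimit -n
	# Illustrative bump (values are examples only)
	sudo sysctl -w fs.inotify.max_user_instances=8192
	sudo sysctl -w fs.inotify.max_user_watches=1048576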
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:06Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
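	The crictl failure here is downstream of crio.service being down: `connection refused` on the socket means nothing is listening, so the service is the first thing to check, and an explicit endpoint removes any ambiguity about which socket crictl probed. A minimal sketch:
	
	# Confirm the runtime is actually up before debugging crictl itself
	sudo systemctl status crio --no-pager
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version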
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8444 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	
	
	==> kernel <==
	 01:21:06 up  4:03,  0 users,  load average: 2.84, 3.23, 2.38
	Linux default-k8s-diff-port-377743 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kubelet <==
	-- No entries --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 01:21:05.915990  844166 logs.go:279] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:05Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:05.948916  844166 logs.go:279] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:05Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:05.981486  844166 logs.go:279] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:05Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:06.014165  844166 logs.go:279] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:06Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:06.049131  844166 logs.go:279] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:06Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:06.082639  844166 logs.go:279] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:06Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:06.114386  844166 logs.go:279] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:06Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:06.147523  844166 logs.go:279] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:06Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:06.181339  844166 logs.go:279] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:06Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""

                                                
                                                
** /stderr **
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743: exit status 6 (328.623895ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 01:21:06.745288  844415 status.go:458] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-377743" does not appear in /home/jenkins/minikube-integration/21550-517646/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "default-k8s-diff-port-377743" apiserver is not running, skipping kubectl commands (state="Stopped")
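	The status errors above stem from the profile's entry having been dropped from the kubeconfig, which is also why the stdout warns about a stale context. minikube's own suggested fix is shown in that warning; a minimal sketch of it alongside the obvious sanity check:
	
	# See which contexts kubectl currently knows about
	kubectl config get-contexts
	# Re-point the kubeconfig at the running profile, per the warning above
	minikube -p default-k8s-diff-port-377743 update-context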
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (1.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-377743" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-377743
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-377743:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc",
	        "Created": "2025-09-17T01:18:46.928961651Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 834842,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T01:20:19.100007065Z",
	            "FinishedAt": "2025-09-17T01:20:18.156352163Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/hosts",
	        "LogPath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc-json.log",
	        "Name": "/default-k8s-diff-port-377743",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-377743:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-377743",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc",
	                "LowerDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-377743",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-377743/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-377743",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-377743",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-377743",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "501c8109c3bc8c00897c7b54c7d2675ba4a3bb996e4f4f197def146bb8ff190a",
	            "SandboxKey": "/var/run/docker/netns/501c8109c3bc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-377743": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:0c:94:b6:e5:07",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2391a23950fb5471e73c0e959464ddf40474359ad3a94730b27d02f587b2a08a",
	                    "EndpointID": "db62485899e89b194074d4c36c3a42a7db3e7cbeba5ca889e1cc809ec8289fa5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-377743",
	                        "ce5cf21d301e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
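When reading an inspect dump like the one above, a Go template extracts just the fragment that matters; here, the published ports that the status probe depends on. A minimal sketch using docker's standard -f flag:

	# Print only the published port map for the profile's container
	docker inspect -f '{{json .NetworkSettings.Ports}}' default-k8s-diff-port-377743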
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743: exit status 6 (311.449306ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 01:21:07.085187  844530 status.go:458] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-377743" does not appear in /home/jenkins/minikube-integration/21550-517646/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-377743 logs -n 25
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-333616 sudo systemctl status kubelet --all --full --no-pager                                                                     │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl cat kubelet --no-pager                                                                                     │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo journalctl -xeu kubelet --all --full --no-pager                                                                      │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/kubernetes/kubelet.conf                                                                                     │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /var/lib/kubelet/config.yaml                                                                                     │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl status docker --all --full --no-pager                                                                      │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl cat docker --no-pager                                                                                      │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/docker/daemon.json                                                                                          │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo docker system info                                                                                                   │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl status cri-docker --all --full --no-pager                                                                  │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl cat cri-docker --no-pager                                                                                  │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                             │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                       │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cri-dockerd --version                                                                                                │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl status containerd --all --full --no-pager                                                                  │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl cat containerd --no-pager                                                                                  │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /lib/systemd/system/containerd.service                                                                           │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/containerd/config.toml                                                                                      │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo containerd config dump                                                                                               │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl status crio --all --full --no-pager                                                                        │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl cat crio --no-pager                                                                                        │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                              │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo crio config                                                                                                          │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ delete  │ -p auto-333616                                                                                                                           │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ start   │ -p kindnet-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio │ kindnet-333616 │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 01:20:46
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
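Every entry that follows uses that klog header layout. A small illustrative parser for it (the regexp is an assumption matching the documented format, not code from minikube):

	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var header = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ :]+):(\d+)\] (.*)$`)
	
	func main() {
		line := "I0917 01:20:46.991253  841202 out.go:360] Setting OutFile to fd 1 ..."
		if m := header.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s mmdd=%s time=%s pid=%s file=%s:%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7])
		}
	}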
	I0917 01:20:46.991253  841202 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:20:46.991355  841202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:20:46.991363  841202 out.go:374] Setting ErrFile to fd 2...
	I0917 01:20:46.991367  841202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:20:46.991948  841202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 01:20:46.993103  841202 out.go:368] Setting JSON to false
	I0917 01:20:46.994427  841202 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14590,"bootTime":1758057457,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 01:20:46.994531  841202 start.go:140] virtualization: kvm guest
	I0917 01:20:46.996762  841202 out.go:179] * [kindnet-333616] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 01:20:46.998033  841202 notify.go:220] Checking for updates...
	I0917 01:20:46.998040  841202 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 01:20:46.999333  841202 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:20:47.000646  841202 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 01:20:47.002223  841202 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 01:20:47.003668  841202 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 01:20:47.005002  841202 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:20:47.006954  841202 config.go:182] Loaded profile config "default-k8s-diff-port-377743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007104  841202 config.go:182] Loaded profile config "embed-certs-748988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007208  841202 config.go:182] Loaded profile config "kubernetes-upgrade-790254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007331  841202 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 01:20:47.034761  841202 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 01:20:47.034876  841202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:20:47.096866  841202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 01:20:47.086442486 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 01:20:47.097016  841202 docker.go:318] overlay module found
	I0917 01:20:47.099127  841202 out.go:179] * Using the docker driver based on user configuration
	I0917 01:20:47.100598  841202 start.go:304] selected driver: docker
	I0917 01:20:47.100620  841202 start.go:918] validating driver "docker" against <nil>
	I0917 01:20:47.100634  841202 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:20:47.101213  841202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:20:47.157653  841202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 01:20:47.147017932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 01:20:47.157843  841202 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0917 01:20:47.158047  841202 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 01:20:47.159808  841202 out.go:179] * Using Docker driver with root privileges
	I0917 01:20:47.161165  841202 cni.go:84] Creating CNI manager for "kindnet"
	I0917 01:20:47.161185  841202 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 01:20:47.161271  841202 start.go:348] cluster config:
	{Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:20:47.162725  841202 out.go:179] * Starting "kindnet-333616" primary control-plane node in "kindnet-333616" cluster
	I0917 01:20:47.164093  841202 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 01:20:47.165424  841202 out.go:179] * Pulling base image v0.0.48 ...
	I0917 01:20:47.166669  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:47.166713  841202 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 01:20:47.166725  841202 cache.go:58] Caching tarball of preloaded images
	I0917 01:20:47.166780  841202 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 01:20:47.166823  841202 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 01:20:47.166834  841202 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 01:20:47.166922  841202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json ...
	I0917 01:20:47.166937  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json: {Name:mkd38d1752014f4bab9dae52a7872fb8a5cc71fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:47.192914  841202 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 01:20:47.192938  841202 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 01:20:47.192970  841202 cache.go:232] Successfully downloaded all kic artifacts
	I0917 01:20:47.193004  841202 start.go:360] acquireMachinesLock for kindnet-333616: {Name:mkc24d8ed730ab1614498d5beb0270c845773667 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:20:47.193133  841202 start.go:364] duration metric: took 104.991µs to acquireMachinesLock for "kindnet-333616"
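The machines lock above follows a simple poll-until-timeout pattern ({Delay:500ms Timeout:10m0s}). A minimal sketch of that shape using an exclusive lock file; this is illustrative, not minikube's actual lock implementation:

	package main
	
	import (
		"errors"
		"os"
		"time"
	)
	
	// acquireFileLock retries an O_EXCL create every `delay` until it wins or `timeout` expires.
	func acquireFileLock(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				// The caller releases the lock by removing the file.
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out acquiring " + path)
			}
			time.Sleep(delay)
		}
	}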
	I0917 01:20:47.193181  841202 start.go:93] Provisioning new machine with config: &{Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 01:20:47.193276  841202 start.go:125] createHost starting for "" (driver="docker")
	W0917 01:20:45.672555  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:47.672815  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	I0917 01:20:47.195051  841202 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 01:20:47.195285  841202 start.go:159] libmachine.API.Create for "kindnet-333616" (driver="docker")
	I0917 01:20:47.195320  841202 client.go:168] LocalClient.Create starting
	I0917 01:20:47.195405  841202 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 01:20:47.195446  841202 main.go:141] libmachine: Decoding PEM data...
	I0917 01:20:47.195462  841202 main.go:141] libmachine: Parsing certificate...
	I0917 01:20:47.195517  841202 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 01:20:47.195536  841202 main.go:141] libmachine: Decoding PEM data...
	I0917 01:20:47.195549  841202 main.go:141] libmachine: Parsing certificate...
	I0917 01:20:47.195889  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 01:20:47.213519  841202 cli_runner.go:211] docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 01:20:47.213608  841202 network_create.go:284] running [docker network inspect kindnet-333616] to gather additional debugging logs...
	I0917 01:20:47.213640  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616
	W0917 01:20:47.231055  841202 cli_runner.go:211] docker network inspect kindnet-333616 returned with exit code 1
	I0917 01:20:47.231092  841202 network_create.go:287] error running [docker network inspect kindnet-333616]: docker network inspect kindnet-333616: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-333616 not found
	I0917 01:20:47.231127  841202 network_create.go:289] output of [docker network inspect kindnet-333616]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-333616 not found
	
	** /stderr **
	I0917 01:20:47.231231  841202 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 01:20:47.249036  841202 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c0c35d0ccc41 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:29:30:69:13:a2} reservation:<nil>}
	I0917 01:20:47.249865  841202 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f7514a86599 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7e:c0:7e:cc:23:dc} reservation:<nil>}
	I0917 01:20:47.250378  841202 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0cef36e94e8e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:db:fd:7a:23:9f} reservation:<nil>}
	I0917 01:20:47.250966  841202 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8b9dd3e2b39a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:42:6a:d6:f0:80:2b} reservation:<nil>}
	I0917 01:20:47.251698  841202 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-2391a23950fb IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:6b:a9:b6:cd:fd} reservation:<nil>}
	I0917 01:20:47.252201  841202 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-2f0a55cba78d IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:3e:b8:6b:32:ae:3d} reservation:<nil>}
	I0917 01:20:47.253017  841202 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d90400}
	I0917 01:20:47.253041  841202 network_create.go:124] attempt to create docker network kindnet-333616 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0917 01:20:47.253107  841202 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-333616 kindnet-333616
	I0917 01:20:47.313030  841202 network_create.go:108] docker network kindnet-333616 192.168.103.0/24 created
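The subnet probe above walks candidate /24s in steps of nine (192.168.49.0 → 58 → 67 → ... → 103) and takes the first one not owned by an existing bridge. A compact sketch of that walk; the helper and its inputs are illustrative, not minikube's network.go:

	package main
	
	import "fmt"
	
	// firstFreeSubnet returns the first 192.168.x.0/24 (x = 49, 58, 67, ...) that is not taken.
	func firstFreeSubnet(taken map[string]bool) string {
		for third := 49; third <= 254; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}
	
	func main() {
		// Subnets owned by the br-* bridges enumerated in the log above.
		taken := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
			"192.168.76.0/24": true, "192.168.85.0/24": true, "192.168.94.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // 192.168.103.0/24
	}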
	I0917 01:20:47.313138  841202 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-333616" container
	I0917 01:20:47.313224  841202 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 01:20:47.331726  841202 cli_runner.go:164] Run: docker volume create kindnet-333616 --label name.minikube.sigs.k8s.io=kindnet-333616 --label created_by.minikube.sigs.k8s.io=true
	I0917 01:20:47.350777  841202 oci.go:103] Successfully created a docker volume kindnet-333616
	I0917 01:20:47.350848  841202 cli_runner.go:164] Run: docker run --rm --name kindnet-333616-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-333616 --entrypoint /usr/bin/test -v kindnet-333616:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 01:20:47.744926  841202 oci.go:107] Successfully prepared a docker volume kindnet-333616
	I0917 01:20:47.744972  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:47.744994  841202 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 01:20:47.745059  841202 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-333616:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 01:20:53.421561  834635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:20:53.456108  834635 retry.go:31] will retry after 11.768849883s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:20:53Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	W0917 01:20:50.174804  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:52.673786  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	I0917 01:20:52.004993  841202 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-333616:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.25985666s)
	I0917 01:20:52.005028  841202 kic.go:203] duration metric: took 4.26003048s to extract preloaded images to volume ...
	W0917 01:20:52.005133  841202 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 01:20:52.005164  841202 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 01:20:52.005202  841202 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 01:20:52.066749  841202 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-333616 --name kindnet-333616 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-333616 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-333616 --network kindnet-333616 --ip 192.168.103.2 --volume kindnet-333616:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 01:20:52.362306  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Running}}
	I0917 01:20:52.383555  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.406449  841202 cli_runner.go:164] Run: docker exec kindnet-333616 stat /var/lib/dpkg/alternatives/iptables
	I0917 01:20:52.459697  841202 oci.go:144] the created container "kindnet-333616" has a running status.
	I0917 01:20:52.459737  841202 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa...
	I0917 01:20:52.716503  841202 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 01:20:52.742117  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.761330  841202 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 01:20:52.761355  841202 kic_runner.go:114] Args: [docker exec --privileged kindnet-333616 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 01:20:52.809335  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.831209  841202 machine.go:93] provisionDockerMachine start ...
	I0917 01:20:52.831331  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:52.852889  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:52.853249  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:52.853269  841202 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 01:20:52.992938  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-333616
	
	I0917 01:20:52.992969  841202 ubuntu.go:182] provisioning hostname "kindnet-333616"
	I0917 01:20:52.993051  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.013532  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:53.013764  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:53.013778  841202 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-333616 && echo "kindnet-333616" | sudo tee /etc/hostname
	I0917 01:20:53.166881  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-333616
	
	I0917 01:20:53.166973  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.187352  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:53.187631  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:53.187658  841202 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-333616' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-333616/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-333616' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 01:20:53.332338  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 01:20:53.332408  841202 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 01:20:53.332452  841202 ubuntu.go:190] setting up certificates
	I0917 01:20:53.332472  841202 provision.go:84] configureAuth start
	I0917 01:20:53.332570  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:53.352359  841202 provision.go:143] copyHostCerts
	I0917 01:20:53.352466  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 01:20:53.352481  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 01:20:53.352553  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 01:20:53.352652  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 01:20:53.352661  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 01:20:53.352689  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 01:20:53.352759  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 01:20:53.352766  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 01:20:53.352789  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 01:20:53.352841  841202 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.kindnet-333616 san=[127.0.0.1 192.168.103.2 kindnet-333616 localhost minikube]
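configureAuth is issuing a server certificate whose SANs are exactly the logged list (127.0.0.1, 192.168.103.2, kindnet-333616, localhost, minikube), signed by the profile CA. A minimal sketch of such an issuance with crypto/x509; this is an assumption-level illustration, not minikube's provision.go:

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)
	
	// issueServerCert signs a server cert carrying the SANs seen in the log, using the given CA.
	func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-333616"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
			DNSNames:     []string{"kindnet-333616", "localhost", "minikube"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}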
	I0917 01:20:53.973038  841202 provision.go:177] copyRemoteCerts
	I0917 01:20:53.973143  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 01:20:53.973182  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.991696  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.091426  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0917 01:20:54.121737  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 01:20:54.150762  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 01:20:54.179160  841202 provision.go:87] duration metric: took 846.669603ms to configureAuth
	I0917 01:20:54.179187  841202 ubuntu.go:206] setting minikube options for container-runtime
	I0917 01:20:54.179345  841202 config.go:182] Loaded profile config "kindnet-333616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:54.179463  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.198684  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:54.198909  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:54.198925  841202 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 01:20:54.444483  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 01:20:54.444511  841202 machine.go:96] duration metric: took 1.613270939s to provisionDockerMachine
	I0917 01:20:54.444522  841202 client.go:171] duration metric: took 7.249193748s to LocalClient.Create
	I0917 01:20:54.444542  841202 start.go:167] duration metric: took 7.249257601s to libmachine.API.Create "kindnet-333616"
	I0917 01:20:54.444554  841202 start.go:293] postStartSetup for "kindnet-333616" (driver="docker")
	I0917 01:20:54.444572  841202 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 01:20:54.444641  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 01:20:54.444690  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.463166  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.563892  841202 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 01:20:54.567735  841202 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 01:20:54.567765  841202 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 01:20:54.567772  841202 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 01:20:54.567782  841202 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 01:20:54.567795  841202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 01:20:54.567855  841202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 01:20:54.567966  841202 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 01:20:54.568108  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 01:20:54.577885  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 01:20:54.606690  841202 start.go:296] duration metric: took 162.114963ms for postStartSetup
	I0917 01:20:54.607107  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:54.625322  841202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json ...
	I0917 01:20:54.625758  841202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 01:20:54.625821  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.643332  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.737805  841202 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 01:20:54.742465  841202 start.go:128] duration metric: took 7.549168533s to createHost
	I0917 01:20:54.742494  841202 start.go:83] releasing machines lock for "kindnet-333616", held for 7.549346209s
	I0917 01:20:54.742570  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:54.759991  841202 ssh_runner.go:195] Run: cat /version.json
	I0917 01:20:54.760051  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.760083  841202 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 01:20:54.760154  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.778915  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.779306  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.952563  841202 ssh_runner.go:195] Run: systemctl --version
	I0917 01:20:54.957470  841202 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 01:20:55.101309  841202 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 01:20:55.106493  841202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:20:55.131742  841202 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 01:20:55.131831  841202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:20:55.164272  841202 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 01:20:55.164303  841202 start.go:495] detecting cgroup driver to use...
	I0917 01:20:55.164352  841202 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 01:20:55.164430  841202 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 01:20:55.182732  841202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 01:20:55.194856  841202 docker.go:218] disabling cri-docker service (if available) ...
	I0917 01:20:55.194918  841202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 01:20:55.209368  841202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 01:20:55.224908  841202 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 01:20:55.294219  841202 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 01:20:55.366744  841202 docker.go:234] disabling docker service ...
	I0917 01:20:55.366805  841202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 01:20:55.386004  841202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 01:20:55.398281  841202 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 01:20:55.471097  841202 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 01:20:55.620605  841202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 01:20:55.632936  841202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 01:20:55.650751  841202 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 01:20:55.650813  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.665355  841202 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 01:20:55.665449  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.677774  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.688724  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.700141  841202 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 01:20:55.711135  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.722974  841202 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.741236  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
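Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf, reconstructed from the commands rather than captured from the node (section headers omitted):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]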
	I0917 01:20:55.752869  841202 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 01:20:55.762991  841202 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 01:20:55.772774  841202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:20:55.842833  841202 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 01:20:55.939370  841202 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 01:20:55.939456  841202 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 01:20:55.943491  841202 start.go:563] Will wait 60s for crictl version
	I0917 01:20:55.943562  841202 ssh_runner.go:195] Run: which crictl
	I0917 01:20:55.947384  841202 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:20:55.984137  841202 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 01:20:55.984206  841202 ssh_runner.go:195] Run: crio --version
	I0917 01:20:56.022652  841202 ssh_runner.go:195] Run: crio --version
	I0917 01:20:56.062561  841202 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 01:20:56.063985  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 01:20:56.081880  841202 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0917 01:20:56.086073  841202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 01:20:56.098482  841202 kubeadm.go:875] updating cluster {Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 01:20:56.098622  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:56.098685  841202 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:20:56.169870  841202 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 01:20:56.169898  841202 crio.go:433] Images already preloaded, skipping extraction
	I0917 01:20:56.169953  841202 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:20:56.206753  841202 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 01:20:56.206784  841202 cache_images.go:85] Images are preloaded, skipping loading
	I0917 01:20:56.206794  841202 kubeadm.go:926] updating node { 192.168.103.2 8443 v1.34.0 crio true true} ...
	I0917 01:20:56.206913  841202 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-333616 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0917 01:20:56.207001  841202 ssh_runner.go:195] Run: crio config
	I0917 01:20:56.253538  841202 cni.go:84] Creating CNI manager for "kindnet"
	I0917 01:20:56.253567  841202 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 01:20:56.253590  841202 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-333616 NodeName:kindnet-333616 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 01:20:56.253716  841202 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-333616"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 01:20:56.253775  841202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 01:20:56.264146  841202 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 01:20:56.264224  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 01:20:56.274749  841202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0917 01:20:56.293906  841202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 01:20:56.316487  841202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
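	(Editor's note: at this point the rendered config exists on the node as /var/tmp/minikube/kubeadm.yaml.new. As a hypothetical spot-check, not part of the test flow, the file could be validated before init; the `config validate` subcommand exists in the kubeadm CLI from v1.31 on:

	sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	)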
	I0917 01:20:56.336550  841202 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0917 01:20:56.340325  841202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
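	(Editor's note: the /etc/hosts one-liner above is dense; expanded, it is a filter-append-replace, shown here as a sketch of the same steps, not a different command:

	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts   # drop any stale entry
	  echo "192.168.103.2	control-plane.minikube.internal"     # append the fresh mapping
	} > /tmp/h.$$                                                # $$ = shell PID, a unique temp file
	sudo cp /tmp/h.$$ /etc/hosts                                 # cp (not mv) preserves ownership and mode of /etc/hosts
	)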
	I0917 01:20:56.352936  841202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:20:56.418882  841202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 01:20:56.445037  841202 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616 for IP: 192.168.103.2
	I0917 01:20:56.445069  841202 certs.go:194] generating shared ca certs ...
	I0917 01:20:56.445096  841202 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.445265  841202 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 01:20:56.445328  841202 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 01:20:56.445342  841202 certs.go:256] generating profile certs ...
	I0917 01:20:56.445433  841202 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key
	I0917 01:20:56.445452  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt with IP's: []
	I0917 01:20:56.575658  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt ...
	I0917 01:20:56.575692  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt: {Name:mke4c01e2ad680ec95da34129972695bc352dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.575918  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key ...
	I0917 01:20:56.575935  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key: {Name:mk196e199bf8e509067e257fa5978cc4017a9515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.576063  841202 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883
	I0917 01:20:56.576083  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0917 01:20:56.891743  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 ...
	I0917 01:20:56.891776  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883: {Name:mk080638a3e062c43555f3e1bbede660cca9c8ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.891955  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883 ...
	I0917 01:20:56.891969  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883: {Name:mkbe71ad29db0d31be773639ab90fdd03d84b089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.892043  841202 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt
	I0917 01:20:56.892145  841202 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key
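	(Editor's note: the SAN list above, [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2], leads with 10.96.0.1, the first address of the 10.96.0.0/12 ServiceCIDR, i.e. the in-cluster `kubernetes` service IP. The SANs baked into the finished cert can be confirmed with, illustratively:

	openssl x509 -noout -text -in /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt | grep -A1 'Subject Alternative Name'
	)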
	I0917 01:20:56.892212  841202 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key
	I0917 01:20:56.892228  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt with IP's: []
	W0917 01:20:55.172587  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:57.173997  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:59.673374  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	I0917 01:20:57.205489  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt ...
	I0917 01:20:57.205524  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt: {Name:mkf6b5ecd44d0faf20e6e53acc7eeebe333eca17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:57.205728  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key ...
	I0917 01:20:57.205746  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key: {Name:mk2b3f753e527ada6b46c8fd672f3b210e243668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:57.205983  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 01:20:57.206033  841202 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 01:20:57.206049  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 01:20:57.206079  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 01:20:57.206110  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 01:20:57.206143  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 01:20:57.206196  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 01:20:57.206849  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 01:20:57.236316  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 01:20:57.264039  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 01:20:57.290903  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 01:20:57.316649  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 01:20:57.343336  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 01:20:57.369426  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 01:20:57.395757  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 01:20:57.422129  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 01:20:57.452169  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 01:20:57.479060  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 01:20:57.505045  841202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 01:20:57.524210  841202 ssh_runner.go:195] Run: openssl version
	I0917 01:20:57.530236  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 01:20:57.540421  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.544062  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.544118  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.551188  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 01:20:57.561515  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 01:20:57.572283  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.576261  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.576323  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.583692  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 01:20:57.593924  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 01:20:57.604001  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.608154  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.608211  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.615475  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
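	(Editor's note: the `<hash>.0` symlink names in the block above come from OpenSSL's subject-hash lookup scheme: `openssl x509 -hash` prints the value TLS libraries use to find a CA under /etc/ssl/certs. For the minikube CA, this reproduces by hand what the test just did:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	)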
	I0917 01:20:57.625656  841202 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 01:20:57.629541  841202 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 01:20:57.629606  841202 kubeadm.go:392] StartCluster: {Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:20:57.629685  841202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 01:20:57.629748  841202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 01:20:57.668306  841202 cri.go:89] found id: ""
	I0917 01:20:57.668384  841202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 01:20:57.679315  841202 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 01:20:57.689592  841202 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 01:20:57.689666  841202 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 01:20:57.699255  841202 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 01:20:57.699272  841202 kubeadm.go:157] found existing configuration files:
	
	I0917 01:20:57.699327  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 01:20:57.708879  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 01:20:57.708950  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 01:20:57.718406  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 01:20:57.728172  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 01:20:57.728251  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 01:20:57.737991  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 01:20:57.748427  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 01:20:57.748487  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 01:20:57.757822  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 01:20:57.767640  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 01:20:57.767708  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 01:20:57.776934  841202 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 01:20:57.849477  841202 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0917 01:20:57.909176  841202 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
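	(Editor's note: the second preflight warning is self-describing; outside the test flow, its suggested remedy, making the kubelet start on boot instead of being launched by minikube, would simply be:

	sudo systemctl enable kubelet.service
	)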
	I0917 01:21:01.172820  832418 pod_ready.go:94] pod "coredns-66bc5c9577-qqxrk" is "Ready"
	I0917 01:21:01.172851  832418 pod_ready.go:86] duration metric: took 38.505527826s for pod "coredns-66bc5c9577-qqxrk" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.175617  832418 pod_ready.go:83] waiting for pod "etcd-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.179752  832418 pod_ready.go:94] pod "etcd-embed-certs-748988" is "Ready"
	I0917 01:21:01.179779  832418 pod_ready.go:86] duration metric: took 4.135657ms for pod "etcd-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.182426  832418 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.186899  832418 pod_ready.go:94] pod "kube-apiserver-embed-certs-748988" is "Ready"
	I0917 01:21:01.186928  832418 pod_ready.go:86] duration metric: took 4.474792ms for pod "kube-apiserver-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.189100  832418 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.371319  832418 pod_ready.go:94] pod "kube-controller-manager-embed-certs-748988" is "Ready"
	I0917 01:21:01.371352  832418 pod_ready.go:86] duration metric: took 182.22498ms for pod "kube-controller-manager-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.570958  832418 pod_ready.go:83] waiting for pod "kube-proxy-2bkdq" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.970376  832418 pod_ready.go:94] pod "kube-proxy-2bkdq" is "Ready"
	I0917 01:21:01.970432  832418 pod_ready.go:86] duration metric: took 399.444446ms for pod "kube-proxy-2bkdq" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.171077  832418 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.570435  832418 pod_ready.go:94] pod "kube-scheduler-embed-certs-748988" is "Ready"
	I0917 01:21:02.570467  832418 pod_ready.go:86] duration metric: took 399.360883ms for pod "kube-scheduler-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.570484  832418 pod_ready.go:40] duration metric: took 39.908444834s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 01:21:02.617522  832418 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 01:21:02.619899  832418 out.go:179] * Done! kubectl is now configured to use "embed-certs-748988" cluster and "default" namespace by default
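	(Editor's note: the pod_ready polling that just finished is minikube's internal wait; roughly the same readiness check can be reproduced against the finished cluster with kubectl, as an equivalent sketch rather than what the harness runs:

	kubectl --context embed-certs-748988 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=2m
	)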
	I0917 01:21:05.225533  834635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:21:05.270428  834635 out.go:203] 
	W0917 01:21:05.271803  834635 out.go:285] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:05Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	
	W0917 01:21:05.271827  834635 out.go:285] * 
	W0917 01:21:05.273977  834635 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 01:21:05.275509  834635 out.go:203] 
	I0917 01:21:02.051660  819928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:21:02.085236  819928 retry.go:31] will retry after 15.073168141s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:02Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	
	
	==> CRI-O <==
	Sep 17 01:20:23 default-k8s-diff-port-377743 systemd[1]: crio.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 01:20:23 default-k8s-diff-port-377743 systemd[1]: crio.service: Failed with result 'exit-code'.
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: Starting Container Runtime Interface for OCI (CRI-O)...
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900219900Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900375290Z" level=info msg="Node configuration value for hugetlb cgroup is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900412189Z" level=info msg="Node configuration value for pid cgroup is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900479004Z" level=info msg="Node configuration value for memoryswap cgroup is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900490617Z" level=info msg="Node configuration value for cgroup v2 is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.906797224Z" level=info msg="Node configuration value for systemd CollectMode is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.913464400Z" level=info msg="Node configuration value for systemd AllowedCPUs is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.913750835Z" level=info msg="[graphdriver] using prior storage driver: overlay"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.914768261Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.917639624Z" level=info msg="Conmon does support the --sync option"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.917673061Z" level=info msg="Conmon does support the --log-global-size-max option"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919571823Z" level=info msg="Using seccomp default profile when unspecified: true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919593931Z" level=info msg="No seccomp profile specified, using the internal default"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919603651Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919611839Z" level=info msg="No blockio config file specified, blockio not configured"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919618928Z" level=info msg="RDT not available in the host system"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.924637958Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.924675992Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.937133863Z" level=fatal msg="too many open files"
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: crio.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: crio.service: Failed with result 'exit-code'.
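	(Editor's note: the fatal "too many open files" at the end of this CRI-O journal is the likely root cause of every "connection refused" on crio.sock in this test group: the daemon exits during startup, so the socket never appears. On hosts running many nested containers this usually points at exhausted inotify or file-descriptor limits; plausible host-side checks and remedies, not part of the test run:

	sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches   # inspect current inotify limits
	sudo sysctl -w fs.inotify.max_user_instances=8192                  # values commonly raised for kind/minikube CI hosts
	sudo sysctl -w fs.inotify.max_user_watches=1048576
	ulimit -n                                                          # per-process fd limit in the current shell
	)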
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:07Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8444 was refused - did you specify the right host or port?
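	(Editor's note: port 8444 is the non-default API port that gives this profile its name; with CRI-O down no apiserver container can start, so any probe of the endpoint fails the same way, e.g. on the node, illustratively:

	curl -k https://localhost:8444/healthz   # refused while the apiserver is down
	)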
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	
	
	==> kernel <==
	 01:21:07 up  4:03,  0 users,  load average: 2.84, 3.23, 2.38
	Linux default-k8s-diff-port-377743 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kubelet <==
	-- No entries --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 01:21:07.403996  844635 logs.go:279] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:07Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:07.440671  844635 logs.go:279] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:07Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:07.473527  844635 logs.go:279] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:07Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:07.505999  844635 logs.go:279] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:07Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:07.539746  844635 logs.go:279] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:07Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:07.573607  844635 logs.go:279] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:07Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:07.606362  844635 logs.go:279] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:07Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:07.639739  844635 logs.go:279] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:07Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:07.673645  844635 logs.go:279] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:07Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""

                                                
                                                
** /stderr **
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743
E0917 01:21:08.196576  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743: exit status 6 (317.500399ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 01:21:08.227383  844905 status.go:458] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-377743" does not appear in /home/jenkins/minikube-integration/21550-517646/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "default-k8s-diff-port-377743" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (1.48s)
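(Editor's note: the repeated "Your kubectl is pointing to stale minikube-vm" warning and the missing kubeconfig entry are two symptoms of one state: the profile's context is gone from the kubeconfig while the container still runs. The fix the warning itself suggests, hypothetical here since the suite tears the profile down anyway:

	minikube -p default-k8s-diff-port-377743 update-context   # rewrite this profile's kubeconfig entry
	kubectl config get-contexts                               # confirm the context is back
)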

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (1.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-377743" does not exist
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-377743 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-377743 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (48.022773ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-377743" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-377743 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-377743
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-377743:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc",
	        "Created": "2025-09-17T01:18:46.928961651Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 834842,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T01:20:19.100007065Z",
	            "FinishedAt": "2025-09-17T01:20:18.156352163Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/hosts",
	        "LogPath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc-json.log",
	        "Name": "/default-k8s-diff-port-377743",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-377743:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-377743",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc",
	                "LowerDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-377743",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-377743/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-377743",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-377743",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-377743",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "501c8109c3bc8c00897c7b54c7d2675ba4a3bb996e4f4f197def146bb8ff190a",
	            "SandboxKey": "/var/run/docker/netns/501c8109c3bc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-377743": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:0c:94:b6:e5:07",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2391a23950fb5471e73c0e959464ddf40474359ad3a94730b27d02f587b2a08a",
	                    "EndpointID": "db62485899e89b194074d4c36c3a42a7db3e7cbeba5ca889e1cc809ec8289fa5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-377743",
	                        "ce5cf21d301e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743: exit status 6 (300.608428ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 01:21:08.597725  845057 status.go:458] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-377743" does not appear in /home/jenkins/minikube-integration/21550-517646/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-377743 logs -n 25
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-333616 sudo systemctl status kubelet --all --full --no-pager                                                                     │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl cat kubelet --no-pager                                                                                     │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo journalctl -xeu kubelet --all --full --no-pager                                                                      │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/kubernetes/kubelet.conf                                                                                     │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /var/lib/kubelet/config.yaml                                                                                     │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl status docker --all --full --no-pager                                                                      │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl cat docker --no-pager                                                                                      │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/docker/daemon.json                                                                                          │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo docker system info                                                                                                   │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl status cri-docker --all --full --no-pager                                                                  │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl cat cri-docker --no-pager                                                                                  │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                             │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                       │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cri-dockerd --version                                                                                                │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl status containerd --all --full --no-pager                                                                  │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl cat containerd --no-pager                                                                                  │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /lib/systemd/system/containerd.service                                                                           │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/containerd/config.toml                                                                                      │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo containerd config dump                                                                                               │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl status crio --all --full --no-pager                                                                        │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl cat crio --no-pager                                                                                        │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                              │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo crio config                                                                                                          │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ delete  │ -p auto-333616                                                                                                                           │ auto-333616    │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ start   │ -p kindnet-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio │ kindnet-333616 │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
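The Audit table above records the diagnostic sweep the test helpers run over every plausible runtime before deleting a profile. The same sweep is easy to reproduce by hand; a sketch in bash, with the profile name and unit list taken from the table:

    profile=auto-333616
    for unit in kubelet docker cri-docker containerd crio; do
      # status plus the effective unit file, exactly as in the audit log
      minikube -p "$profile" ssh -- "sudo systemctl status $unit --all --full --no-pager"
      minikube -p "$profile" ssh -- "sudo systemctl cat $unit --no-pager"
    done
    minikube -p "$profile" ssh -- "sudo crio config"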
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 01:20:46
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 01:20:46.991253  841202 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:20:46.991355  841202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:20:46.991363  841202 out.go:374] Setting ErrFile to fd 2...
	I0917 01:20:46.991367  841202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:20:46.991948  841202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 01:20:46.993103  841202 out.go:368] Setting JSON to false
	I0917 01:20:46.994427  841202 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14590,"bootTime":1758057457,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 01:20:46.994531  841202 start.go:140] virtualization: kvm guest
	I0917 01:20:46.996762  841202 out.go:179] * [kindnet-333616] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 01:20:46.998033  841202 notify.go:220] Checking for updates...
	I0917 01:20:46.998040  841202 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 01:20:46.999333  841202 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:20:47.000646  841202 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 01:20:47.002223  841202 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 01:20:47.003668  841202 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 01:20:47.005002  841202 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:20:47.006954  841202 config.go:182] Loaded profile config "default-k8s-diff-port-377743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007104  841202 config.go:182] Loaded profile config "embed-certs-748988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007208  841202 config.go:182] Loaded profile config "kubernetes-upgrade-790254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007331  841202 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 01:20:47.034761  841202 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 01:20:47.034876  841202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:20:47.096866  841202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 01:20:47.086442486 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 01:20:47.097016  841202 docker.go:318] overlay module found
	I0917 01:20:47.099127  841202 out.go:179] * Using the docker driver based on user configuration
	I0917 01:20:47.100598  841202 start.go:304] selected driver: docker
	I0917 01:20:47.100620  841202 start.go:918] validating driver "docker" against <nil>
	I0917 01:20:47.100634  841202 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:20:47.101213  841202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:20:47.157653  841202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 01:20:47.147017932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
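The two docker system info dumps above feed minikube's driver validation. The handful of fields it actually acts on (CPU and memory capacity, cgroup driver, server version) can be pulled out directly; a sketch that assumes jq is available on the host:

    docker system info --format '{{json .}}' \
      | jq '{NCPU, MemTotal, CgroupDriver, OperatingSystem, ServerVersion}'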
	I0917 01:20:47.157843  841202 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0917 01:20:47.158047  841202 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 01:20:47.159808  841202 out.go:179] * Using Docker driver with root privileges
	I0917 01:20:47.161165  841202 cni.go:84] Creating CNI manager for "kindnet"
	I0917 01:20:47.161185  841202 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 01:20:47.161271  841202 start.go:348] cluster config:
	{Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:20:47.162725  841202 out.go:179] * Starting "kindnet-333616" primary control-plane node in "kindnet-333616" cluster
	I0917 01:20:47.164093  841202 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 01:20:47.165424  841202 out.go:179] * Pulling base image v0.0.48 ...
	I0917 01:20:47.166669  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:47.166713  841202 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 01:20:47.166725  841202 cache.go:58] Caching tarball of preloaded images
	I0917 01:20:47.166780  841202 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 01:20:47.166823  841202 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 01:20:47.166834  841202 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 01:20:47.166922  841202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json ...
	I0917 01:20:47.166937  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json: {Name:mkd38d1752014f4bab9dae52a7872fb8a5cc71fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:47.192914  841202 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 01:20:47.192938  841202 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 01:20:47.192970  841202 cache.go:232] Successfully downloaded all kic artifacts
	I0917 01:20:47.193004  841202 start.go:360] acquireMachinesLock for kindnet-333616: {Name:mkc24d8ed730ab1614498d5beb0270c845773667 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:20:47.193133  841202 start.go:364] duration metric: took 104.991µs to acquireMachinesLock for "kindnet-333616"
	I0917 01:20:47.193181  841202 start.go:93] Provisioning new machine with config: &{Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 01:20:47.193276  841202 start.go:125] createHost starting for "" (driver="docker")
	W0917 01:20:45.672555  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:47.672815  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	I0917 01:20:47.195051  841202 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 01:20:47.195285  841202 start.go:159] libmachine.API.Create for "kindnet-333616" (driver="docker")
	I0917 01:20:47.195320  841202 client.go:168] LocalClient.Create starting
	I0917 01:20:47.195405  841202 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 01:20:47.195446  841202 main.go:141] libmachine: Decoding PEM data...
	I0917 01:20:47.195462  841202 main.go:141] libmachine: Parsing certificate...
	I0917 01:20:47.195517  841202 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 01:20:47.195536  841202 main.go:141] libmachine: Decoding PEM data...
	I0917 01:20:47.195549  841202 main.go:141] libmachine: Parsing certificate...
	I0917 01:20:47.195889  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 01:20:47.213519  841202 cli_runner.go:211] docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 01:20:47.213608  841202 network_create.go:284] running [docker network inspect kindnet-333616] to gather additional debugging logs...
	I0917 01:20:47.213640  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616
	W0917 01:20:47.231055  841202 cli_runner.go:211] docker network inspect kindnet-333616 returned with exit code 1
	I0917 01:20:47.231092  841202 network_create.go:287] error running [docker network inspect kindnet-333616]: docker network inspect kindnet-333616: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-333616 not found
	I0917 01:20:47.231127  841202 network_create.go:289] output of [docker network inspect kindnet-333616]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-333616 not found
	
	** /stderr **
	I0917 01:20:47.231231  841202 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 01:20:47.249036  841202 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c0c35d0ccc41 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:29:30:69:13:a2} reservation:<nil>}
	I0917 01:20:47.249865  841202 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f7514a86599 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7e:c0:7e:cc:23:dc} reservation:<nil>}
	I0917 01:20:47.250378  841202 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0cef36e94e8e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:db:fd:7a:23:9f} reservation:<nil>}
	I0917 01:20:47.250966  841202 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8b9dd3e2b39a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:42:6a:d6:f0:80:2b} reservation:<nil>}
	I0917 01:20:47.251698  841202 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-2391a23950fb IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:6b:a9:b6:cd:fd} reservation:<nil>}
	I0917 01:20:47.252201  841202 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-2f0a55cba78d IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:3e:b8:6b:32:ae:3d} reservation:<nil>}
	I0917 01:20:47.253017  841202 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d90400}
	I0917 01:20:47.253041  841202 network_create.go:124] attempt to create docker network kindnet-333616 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0917 01:20:47.253107  841202 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-333616 kindnet-333616
	I0917 01:20:47.313030  841202 network_create.go:108] docker network kindnet-333616 192.168.103.0/24 created
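The subnet walk above starts at 192.168.49.0/24 and advances in steps of 9 (58, 67, 76, 85, 94) until it reaches a /24 no existing bridge claims, then creates the network with a fixed gateway and MTU. The equivalent manual steps with the docker CLI alone, names and values copied from the log:

    # which subnets are already claimed by docker networks?
    docker network ls -q | xargs docker network inspect \
      --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
    # create the first free one, as minikube did
    docker network create --driver=bridge \
      --subnet=192.168.103.0/24 --gateway=192.168.103.1 \
      -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true kindnet-333616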
	I0917 01:20:47.313138  841202 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-333616" container
	I0917 01:20:47.313224  841202 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 01:20:47.331726  841202 cli_runner.go:164] Run: docker volume create kindnet-333616 --label name.minikube.sigs.k8s.io=kindnet-333616 --label created_by.minikube.sigs.k8s.io=true
	I0917 01:20:47.350777  841202 oci.go:103] Successfully created a docker volume kindnet-333616
	I0917 01:20:47.350848  841202 cli_runner.go:164] Run: docker run --rm --name kindnet-333616-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-333616 --entrypoint /usr/bin/test -v kindnet-333616:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 01:20:47.744926  841202 oci.go:107] Successfully prepared a docker volume kindnet-333616
	I0917 01:20:47.744972  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:47.744994  841202 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 01:20:47.745059  841202 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-333616:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
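The extraction step runs a throwaway container that mounts the host-side preload tarball read-only next to the new volume and untars the cached images into it. To spot-check the result, the same volume can be mounted from any small image; a sketch (busybox and the storage path are assumptions based on the CRI-O preload layout, not something the log shows):

    # the volume backs /var in the node container, so the image store
    # should appear under lib/containers inside it
    docker run --rm -v kindnet-333616:/var:ro busybox \
      ls /var/lib/containers/storage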
	I0917 01:20:53.421561  834635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:20:53.456108  834635 retry.go:31] will retry after 11.768849883s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:20:53Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
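This "connection refused" on crio.sock comes from the second, interleaved run (process 834635) and only means CRI-O has not finished starting; the retry loop above it handles exactly this case. The same probe can be made interactively on the node:

    # is the service up, and does the socket answer?
    sudo systemctl status crio --no-pager
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version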
	W0917 01:20:50.174804  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:52.673786  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	I0917 01:20:52.004993  841202 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-333616:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.25985666s)
	I0917 01:20:52.005028  841202 kic.go:203] duration metric: took 4.26003048s to extract preloaded images to volume ...
	W0917 01:20:52.005133  841202 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 01:20:52.005164  841202 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 01:20:52.005202  841202 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 01:20:52.066749  841202 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-333616 --name kindnet-333616 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-333616 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-333616 --network kindnet-333616 --ip 192.168.103.2 --volume kindnet-333616:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 01:20:52.362306  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Running}}
	I0917 01:20:52.383555  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.406449  841202 cli_runner.go:164] Run: docker exec kindnet-333616 stat /var/lib/dpkg/alternatives/iptables
	I0917 01:20:52.459697  841202 oci.go:144] the created container "kindnet-333616" has a running status.
	I0917 01:20:52.459737  841202 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa...
	I0917 01:20:52.716503  841202 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 01:20:52.742117  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.761330  841202 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 01:20:52.761355  841202 kic_runner.go:114] Args: [docker exec --privileged kindnet-333616 chown docker:docker /home/docker/.ssh/authorized_keys]
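With the key installed in /home/docker/.ssh/authorized_keys, the node is reachable over plain SSH as well as through the kic runner; given the host port mapping logged just below (33478), a direct connection would look like:

    ssh -i /home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa \
        -p 33478 docker@127.0.0.1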
	I0917 01:20:52.809335  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.831209  841202 machine.go:93] provisionDockerMachine start ...
	I0917 01:20:52.831331  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:52.852889  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:52.853249  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:52.853269  841202 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 01:20:52.992938  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-333616
	
	I0917 01:20:52.992969  841202 ubuntu.go:182] provisioning hostname "kindnet-333616"
	I0917 01:20:52.993051  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.013532  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:53.013764  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:53.013778  841202 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-333616 && echo "kindnet-333616" | sudo tee /etc/hostname
	I0917 01:20:53.166881  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-333616
	
	I0917 01:20:53.166973  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.187352  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:53.187631  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:53.187658  841202 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-333616' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-333616/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-333616' | sudo tee -a /etc/hosts; 
				fi
			fi
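The script above follows the Debian/Ubuntu convention of binding the machine's own hostname to 127.0.1.1 rather than 127.0.0.1, replacing any existing 127.0.1.1 entry in place. Its net effect is a single line in /etc/hosts:

    127.0.1.1 kindnet-333616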
	I0917 01:20:53.332338  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 01:20:53.332408  841202 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 01:20:53.332452  841202 ubuntu.go:190] setting up certificates
	I0917 01:20:53.332472  841202 provision.go:84] configureAuth start
	I0917 01:20:53.332570  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:53.352359  841202 provision.go:143] copyHostCerts
	I0917 01:20:53.352466  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 01:20:53.352481  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 01:20:53.352553  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 01:20:53.352652  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 01:20:53.352661  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 01:20:53.352689  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 01:20:53.352759  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 01:20:53.352766  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 01:20:53.352789  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 01:20:53.352841  841202 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.kindnet-333616 san=[127.0.0.1 192.168.103.2 kindnet-333616 localhost minikube]
	I0917 01:20:53.973038  841202 provision.go:177] copyRemoteCerts
	I0917 01:20:53.973143  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 01:20:53.973182  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.991696  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.091426  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0917 01:20:54.121737  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 01:20:54.150762  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 01:20:54.179160  841202 provision.go:87] duration metric: took 846.669603ms to configureAuth
	I0917 01:20:54.179187  841202 ubuntu.go:206] setting minikube options for container-runtime
	I0917 01:20:54.179345  841202 config.go:182] Loaded profile config "kindnet-333616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:54.179463  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.198684  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:54.198909  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:54.198925  841202 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 01:20:54.444483  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 01:20:54.444511  841202 machine.go:96] duration metric: took 1.613270939s to provisionDockerMachine
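The sysconfig write a few lines up is how minikube hands CRI-O an --insecure-registry flag for the service CIDR: the value lands in /etc/sysconfig/crio.minikube, which the kicbase crio unit presumably sources as an environment file (an assumption; the log only shows the write and the restart). To confirm on the node:

    minikube -p kindnet-333616 ssh -- sudo cat /etc/sysconfig/crio.minikube
    minikube -p kindnet-333616 ssh -- sudo systemctl cat crio --no-pager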
	I0917 01:20:54.444522  841202 client.go:171] duration metric: took 7.249193748s to LocalClient.Create
	I0917 01:20:54.444542  841202 start.go:167] duration metric: took 7.249257601s to libmachine.API.Create "kindnet-333616"
	I0917 01:20:54.444554  841202 start.go:293] postStartSetup for "kindnet-333616" (driver="docker")
	I0917 01:20:54.444572  841202 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 01:20:54.444641  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 01:20:54.444690  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.463166  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.563892  841202 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 01:20:54.567735  841202 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 01:20:54.567765  841202 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 01:20:54.567772  841202 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 01:20:54.567782  841202 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 01:20:54.567795  841202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 01:20:54.567855  841202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 01:20:54.567966  841202 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 01:20:54.568108  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 01:20:54.577885  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 01:20:54.606690  841202 start.go:296] duration metric: took 162.114963ms for postStartSetup
	I0917 01:20:54.607107  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:54.625322  841202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json ...
	I0917 01:20:54.625758  841202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 01:20:54.625821  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.643332  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.737805  841202 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 01:20:54.742465  841202 start.go:128] duration metric: took 7.549168533s to createHost
	I0917 01:20:54.742494  841202 start.go:83] releasing machines lock for "kindnet-333616", held for 7.549346209s
	I0917 01:20:54.742570  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:54.759991  841202 ssh_runner.go:195] Run: cat /version.json
	I0917 01:20:54.760051  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.760083  841202 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 01:20:54.760154  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.778915  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.779306  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.952563  841202 ssh_runner.go:195] Run: systemctl --version
	I0917 01:20:54.957470  841202 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 01:20:55.101309  841202 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 01:20:55.106493  841202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:20:55.131742  841202 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 01:20:55.131831  841202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:20:55.164272  841202 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 01:20:55.164303  841202 start.go:495] detecting cgroup driver to use...
	I0917 01:20:55.164352  841202 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 01:20:55.164430  841202 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 01:20:55.182732  841202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 01:20:55.194856  841202 docker.go:218] disabling cri-docker service (if available) ...
	I0917 01:20:55.194918  841202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 01:20:55.209368  841202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 01:20:55.224908  841202 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 01:20:55.294219  841202 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 01:20:55.366744  841202 docker.go:234] disabling docker service ...
	I0917 01:20:55.366805  841202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 01:20:55.386004  841202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 01:20:55.398281  841202 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 01:20:55.471097  841202 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 01:20:55.620605  841202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 01:20:55.632936  841202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 01:20:55.650751  841202 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 01:20:55.650813  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.665355  841202 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 01:20:55.665449  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.677774  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.688724  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.700141  841202 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 01:20:55.711135  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.722974  841202 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.741236  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.752869  841202 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 01:20:55.762991  841202 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 01:20:55.772774  841202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:20:55.842833  841202 ssh_runner.go:195] Run: sudo systemctl restart crio
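Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before the restart (a reconstruction from the commands, not a dump of the file):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]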
	I0917 01:20:55.939370  841202 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 01:20:55.939456  841202 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 01:20:55.943491  841202 start.go:563] Will wait 60s for crictl version
	I0917 01:20:55.943562  841202 ssh_runner.go:195] Run: which crictl
	I0917 01:20:55.947384  841202 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:20:55.984137  841202 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 01:20:55.984206  841202 ssh_runner.go:195] Run: crio --version
	I0917 01:20:56.022652  841202 ssh_runner.go:195] Run: crio --version
	I0917 01:20:56.062561  841202 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 01:20:56.063985  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 01:20:56.081880  841202 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0917 01:20:56.086073  841202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 01:20:56.098482  841202 kubeadm.go:875] updating cluster {Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 01:20:56.098622  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:56.098685  841202 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:20:56.169870  841202 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 01:20:56.169898  841202 crio.go:433] Images already preloaded, skipping extraction
	I0917 01:20:56.169953  841202 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:20:56.206753  841202 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 01:20:56.206784  841202 cache_images.go:85] Images are preloaded, skipping loading
	I0917 01:20:56.206794  841202 kubeadm.go:926] updating node { 192.168.103.2 8443 v1.34.0 crio true true} ...
	I0917 01:20:56.206913  841202 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-333616 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0917 01:20:56.207001  841202 ssh_runner.go:195] Run: crio config
	I0917 01:20:56.253538  841202 cni.go:84] Creating CNI manager for "kindnet"
	I0917 01:20:56.253567  841202 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 01:20:56.253590  841202 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-333616 NodeName:kindnet-333616 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 01:20:56.253716  841202 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-333616"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 01:20:56.253775  841202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 01:20:56.264146  841202 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 01:20:56.264224  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 01:20:56.274749  841202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0917 01:20:56.293906  841202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 01:20:56.316487  841202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I0917 01:20:56.336550  841202 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0917 01:20:56.340325  841202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 01:20:56.352936  841202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:20:56.418882  841202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 01:20:56.445037  841202 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616 for IP: 192.168.103.2
	I0917 01:20:56.445069  841202 certs.go:194] generating shared ca certs ...
	I0917 01:20:56.445096  841202 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.445265  841202 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 01:20:56.445328  841202 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 01:20:56.445342  841202 certs.go:256] generating profile certs ...
	I0917 01:20:56.445433  841202 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key
	I0917 01:20:56.445452  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt with IP's: []
	I0917 01:20:56.575658  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt ...
	I0917 01:20:56.575692  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt: {Name:mke4c01e2ad680ec95da34129972695bc352dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.575918  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key ...
	I0917 01:20:56.575935  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key: {Name:mk196e199bf8e509067e257fa5978cc4017a9515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.576063  841202 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883
	I0917 01:20:56.576083  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0917 01:20:56.891743  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 ...
	I0917 01:20:56.891776  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883: {Name:mk080638a3e062c43555f3e1bbede660cca9c8ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.891955  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883 ...
	I0917 01:20:56.891969  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883: {Name:mkbe71ad29db0d31be773639ab90fdd03d84b089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.892043  841202 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt
	I0917 01:20:56.892145  841202 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key
	I0917 01:20:56.892212  841202 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key
	I0917 01:20:56.892228  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt with IP's: []
	W0917 01:20:55.172587  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:57.173997  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:59.673374  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	I0917 01:20:57.205489  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt ...
	I0917 01:20:57.205524  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt: {Name:mkf6b5ecd44d0faf20e6e53acc7eeebe333eca17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:57.205728  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key ...
	I0917 01:20:57.205746  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key: {Name:mk2b3f753e527ada6b46c8fd672f3b210e243668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:57.205983  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 01:20:57.206033  841202 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 01:20:57.206049  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 01:20:57.206079  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 01:20:57.206110  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 01:20:57.206143  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 01:20:57.206196  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 01:20:57.206849  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 01:20:57.236316  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 01:20:57.264039  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 01:20:57.290903  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 01:20:57.316649  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 01:20:57.343336  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 01:20:57.369426  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 01:20:57.395757  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 01:20:57.422129  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 01:20:57.452169  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 01:20:57.479060  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 01:20:57.505045  841202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 01:20:57.524210  841202 ssh_runner.go:195] Run: openssl version
	I0917 01:20:57.530236  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 01:20:57.540421  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.544062  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.544118  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.551188  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 01:20:57.561515  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 01:20:57.572283  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.576261  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.576323  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.583692  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 01:20:57.593924  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 01:20:57.604001  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.608154  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.608211  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.615475  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 01:20:57.625656  841202 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 01:20:57.629541  841202 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 01:20:57.629606  841202 kubeadm.go:392] StartCluster: {Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:20:57.629685  841202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 01:20:57.629748  841202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 01:20:57.668306  841202 cri.go:89] found id: ""
	I0917 01:20:57.668384  841202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 01:20:57.679315  841202 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 01:20:57.689592  841202 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 01:20:57.689666  841202 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 01:20:57.699255  841202 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 01:20:57.699272  841202 kubeadm.go:157] found existing configuration files:
	
	I0917 01:20:57.699327  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 01:20:57.708879  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 01:20:57.708950  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 01:20:57.718406  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 01:20:57.728172  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 01:20:57.728251  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 01:20:57.737991  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 01:20:57.748427  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 01:20:57.748487  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 01:20:57.757822  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 01:20:57.767640  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 01:20:57.767708  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 01:20:57.776934  841202 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 01:20:57.849477  841202 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0917 01:20:57.909176  841202 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 01:21:01.172820  832418 pod_ready.go:94] pod "coredns-66bc5c9577-qqxrk" is "Ready"
	I0917 01:21:01.172851  832418 pod_ready.go:86] duration metric: took 38.505527826s for pod "coredns-66bc5c9577-qqxrk" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.175617  832418 pod_ready.go:83] waiting for pod "etcd-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.179752  832418 pod_ready.go:94] pod "etcd-embed-certs-748988" is "Ready"
	I0917 01:21:01.179779  832418 pod_ready.go:86] duration metric: took 4.135657ms for pod "etcd-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.182426  832418 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.186899  832418 pod_ready.go:94] pod "kube-apiserver-embed-certs-748988" is "Ready"
	I0917 01:21:01.186928  832418 pod_ready.go:86] duration metric: took 4.474792ms for pod "kube-apiserver-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.189100  832418 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.371319  832418 pod_ready.go:94] pod "kube-controller-manager-embed-certs-748988" is "Ready"
	I0917 01:21:01.371352  832418 pod_ready.go:86] duration metric: took 182.22498ms for pod "kube-controller-manager-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.570958  832418 pod_ready.go:83] waiting for pod "kube-proxy-2bkdq" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.970376  832418 pod_ready.go:94] pod "kube-proxy-2bkdq" is "Ready"
	I0917 01:21:01.970432  832418 pod_ready.go:86] duration metric: took 399.444446ms for pod "kube-proxy-2bkdq" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.171077  832418 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.570435  832418 pod_ready.go:94] pod "kube-scheduler-embed-certs-748988" is "Ready"
	I0917 01:21:02.570467  832418 pod_ready.go:86] duration metric: took 399.360883ms for pod "kube-scheduler-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.570484  832418 pod_ready.go:40] duration metric: took 39.908444834s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 01:21:02.617522  832418 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 01:21:02.619899  832418 out.go:179] * Done! kubectl is now configured to use "embed-certs-748988" cluster and "default" namespace by default
	I0917 01:21:05.225533  834635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:21:05.270428  834635 out.go:203] 
	W0917 01:21:05.271803  834635 out.go:285] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:05Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	
	W0917 01:21:05.271827  834635 out.go:285] * 
	W0917 01:21:05.273977  834635 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 01:21:05.275509  834635 out.go:203] 
	I0917 01:21:02.051660  819928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:21:02.085236  819928 retry.go:31] will retry after 15.073168141s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:02Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	
	
	==> CRI-O <==
	Sep 17 01:20:23 default-k8s-diff-port-377743 systemd[1]: crio.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 01:20:23 default-k8s-diff-port-377743 systemd[1]: crio.service: Failed with result 'exit-code'.
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: Starting Container Runtime Interface for OCI (CRI-O)...
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900219900Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900375290Z" level=info msg="Node configuration value for hugetlb cgroup is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900412189Z" level=info msg="Node configuration value for pid cgroup is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900479004Z" level=info msg="Node configuration value for memoryswap cgroup is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900490617Z" level=info msg="Node configuration value for cgroup v2 is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.906797224Z" level=info msg="Node configuration value for systemd CollectMode is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.913464400Z" level=info msg="Node configuration value for systemd AllowedCPUs is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.913750835Z" level=info msg="[graphdriver] using prior storage driver: overlay"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.914768261Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.917639624Z" level=info msg="Conmon does support the --sync option"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.917673061Z" level=info msg="Conmon does support the --log-global-size-max option"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919571823Z" level=info msg="Using seccomp default profile when unspecified: true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919593931Z" level=info msg="No seccomp profile specified, using the internal default"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919603651Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919611839Z" level=info msg="No blockio config file specified, blockio not configured"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919618928Z" level=info msg="RDT not available in the host system"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.924637958Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.924675992Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.937133863Z" level=fatal msg="too many open files"
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: crio.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: crio.service: Failed with result 'exit-code'.
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:09Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8444 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	
	
	==> kernel <==
	 01:21:09 up  4:03,  0 users,  load average: 2.84, 3.23, 2.38
	Linux default-k8s-diff-port-377743 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kubelet <==
	-- No entries --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 01:21:08.928891  845168 logs.go:279] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:08Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:08.966020  845168 logs.go:279] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:08Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:09.006622  845168 logs.go:279] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:09Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:09.052146  845168 logs.go:279] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:09Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:09.095516  845168 logs.go:279] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:09Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:09.132026  845168 logs.go:279] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:09Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:09.166531  845168 logs.go:279] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:09Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:09.202816  845168 logs.go:279] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:09Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:09.235504  845168 logs.go:279] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:09Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""

                                                
                                                
** /stderr **
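
Every error in the stderr block above reduces to the same root cause: nothing is listening on /var/run/crio/crio.sock (CRI-O exited with "too many open files" in the journal earlier), so each crictl invocation fails with "connect: connection refused" before it can even negotiate an API version. A minimal standalone Go sketch of that same probe follows; the socket path is taken from the logs, everything else is illustrative and not minikube's or crictl's actual code. Run as root, since the socket file is root-owned.

	// probe_crio_sock.go - hypothetical diagnostic, not part of minikube.
	// Dials the unix socket that crictl's gRPC client ultimately connects to;
	// while crio.service is down this prints the same connection-refused error
	// seen throughout the stderr block above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/crio/crio.sock", 2*time.Second)
		if err != nil {
			fmt.Println("CRI-O socket unreachable:", err) // expected on this node
			return
		}
		defer conn.Close()
		fmt.Println("CRI-O socket accepted the connection")
	}
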
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743: exit status 6 (299.637589ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 01:21:09.765750  845457 status.go:458] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-377743" does not appear in /home/jenkins/minikube-integration/21550-517646/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "default-k8s-diff-port-377743" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (1.54s)
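
A note on the status probes used throughout these post-mortems: a flag like --format={{.APIServer}} is a Go text/template evaluated against minikube's status struct, which is why the command above prints a bare "Stopped". A minimal sketch of that mechanism, assuming a stand-in struct (field names chosen to match the templates used here, not copied from minikube's source):

	// status_template.go - illustrative only.
	package main

	import (
		"os"
		"text/template"
	)

	// Status is a hypothetical stand-in for minikube's status type.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		// Parse the same template string passed via --format and render it.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", APIServer: "Stopped"})
	}
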

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-377743 image list --format=json
start_stop_delete_test.go:302: v1.34.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.12.1",
- 	"registry.k8s.io/etcd:3.6.4-0",
- 	"registry.k8s.io/kube-apiserver:v1.34.0",
- 	"registry.k8s.io/kube-controller-manager:v1.34.0",
- 	"registry.k8s.io/kube-proxy:v1.34.0",
- 	"registry.k8s.io/kube-scheduler:v1.34.0",
- 	"registry.k8s.io/pause:3.10.1",
  }
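
The "(-want +got)" block above uses the diff convention of the go-cmp library: every expected image appears only on the want side because `image list` returned nothing while the container runtime was unreachable. A minimal sketch of producing such a diff, assuming go-cmp; this mirrors the output format, not the test's actual code:

	// image_diff.go - illustrative only.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		// A shortened stand-in for the expected v1.34.0 image set above.
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.34.0",
			"registry.k8s.io/pause:3.10.1",
		}
		var got []string // image list yields nothing while CRI-O is down

		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}
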
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-377743
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-377743:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc",
	        "Created": "2025-09-17T01:18:46.928961651Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 834842,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T01:20:19.100007065Z",
	            "FinishedAt": "2025-09-17T01:20:18.156352163Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/hosts",
	        "LogPath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc-json.log",
	        "Name": "/default-k8s-diff-port-377743",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-377743:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-377743",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc",
	                "LowerDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-377743",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-377743/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-377743",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-377743",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-377743",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "501c8109c3bc8c00897c7b54c7d2675ba4a3bb996e4f4f197def146bb8ff190a",
	            "SandboxKey": "/var/run/docker/netns/501c8109c3bc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-377743": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:0c:94:b6:e5:07",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2391a23950fb5471e73c0e959464ddf40474359ad3a94730b27d02f587b2a08a",
	                    "EndpointID": "db62485899e89b194074d4c36c3a42a7db3e7cbeba5ca889e1cc809ec8289fa5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-377743",
	                        "ce5cf21d301e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743: exit status 6 (298.11725ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 01:21:10.318713  845638 status.go:458] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-377743" does not appear in /home/jenkins/minikube-integration/21550-517646/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-377743 logs -n 25
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-333616 sudo systemctl cat kubelet --no-pager                                                                                     │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo journalctl -xeu kubelet --all --full --no-pager                                                                      │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/kubernetes/kubelet.conf                                                                                     │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /var/lib/kubelet/config.yaml                                                                                     │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl status docker --all --full --no-pager                                                                      │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl cat docker --no-pager                                                                                      │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/docker/daemon.json                                                                                          │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo docker system info                                                                                                   │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl status cri-docker --all --full --no-pager                                                                  │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl cat cri-docker --no-pager                                                                                  │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                             │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                       │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cri-dockerd --version                                                                                                │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl status containerd --all --full --no-pager                                                                  │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl cat containerd --no-pager                                                                                  │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /lib/systemd/system/containerd.service                                                                           │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/containerd/config.toml                                                                                      │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo containerd config dump                                                                                               │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl status crio --all --full --no-pager                                                                        │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl cat crio --no-pager                                                                                        │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                              │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo crio config                                                                                                          │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ delete  │ -p auto-333616                                                                                                                           │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ start   │ -p kindnet-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio │ kindnet-333616               │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ image   │ default-k8s-diff-port-377743 image list --format=json                                                                                    │ default-k8s-diff-port-377743 │ jenkins │ v1.37.0 │ 17 Sep 25 01:21 UTC │ 17 Sep 25 01:21 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 01:20:46
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
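
Every log line below follows that klog header layout. As an illustration only (the regular expression and field names here are ours, not minikube's), a minimal Go sketch for splitting one of these headers apart:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches headers like "I0917 01:20:46.991253  841202 out.go:360] msg",
// i.e. [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg as described above.
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	sample := "I0917 01:20:46.991253  841202 out.go:360] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("not a klog header")
		return
	}
	// m[1]=severity, m[2]=mmdd, m[3]=time, m[4]=pid, m[5]=file, m[6]=line, m[7]=msg
	fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s:%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}

Note the pid field: the lines below interleave several minikube processes (841202, 832418, 834635), which is why timestamps occasionally jump backwards.
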
	I0917 01:20:46.991253  841202 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:20:46.991355  841202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:20:46.991363  841202 out.go:374] Setting ErrFile to fd 2...
	I0917 01:20:46.991367  841202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:20:46.991948  841202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 01:20:46.993103  841202 out.go:368] Setting JSON to false
	I0917 01:20:46.994427  841202 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14590,"bootTime":1758057457,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 01:20:46.994531  841202 start.go:140] virtualization: kvm guest
	I0917 01:20:46.996762  841202 out.go:179] * [kindnet-333616] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 01:20:46.998033  841202 notify.go:220] Checking for updates...
	I0917 01:20:46.998040  841202 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 01:20:46.999333  841202 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:20:47.000646  841202 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 01:20:47.002223  841202 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 01:20:47.003668  841202 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 01:20:47.005002  841202 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:20:47.006954  841202 config.go:182] Loaded profile config "default-k8s-diff-port-377743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007104  841202 config.go:182] Loaded profile config "embed-certs-748988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007208  841202 config.go:182] Loaded profile config "kubernetes-upgrade-790254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007331  841202 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 01:20:47.034761  841202 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 01:20:47.034876  841202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:20:47.096866  841202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 01:20:47.086442486 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 01:20:47.097016  841202 docker.go:318] overlay module found
	I0917 01:20:47.099127  841202 out.go:179] * Using the docker driver based on user configuration
	I0917 01:20:47.100598  841202 start.go:304] selected driver: docker
	I0917 01:20:47.100620  841202 start.go:918] validating driver "docker" against <nil>
	I0917 01:20:47.100634  841202 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:20:47.101213  841202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:20:47.157653  841202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 01:20:47.147017932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 01:20:47.157843  841202 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0917 01:20:47.158047  841202 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 01:20:47.159808  841202 out.go:179] * Using Docker driver with root privileges
	I0917 01:20:47.161165  841202 cni.go:84] Creating CNI manager for "kindnet"
	I0917 01:20:47.161185  841202 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 01:20:47.161271  841202 start.go:348] cluster config:
	{Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:20:47.162725  841202 out.go:179] * Starting "kindnet-333616" primary control-plane node in "kindnet-333616" cluster
	I0917 01:20:47.164093  841202 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 01:20:47.165424  841202 out.go:179] * Pulling base image v0.0.48 ...
	I0917 01:20:47.166669  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:47.166713  841202 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 01:20:47.166725  841202 cache.go:58] Caching tarball of preloaded images
	I0917 01:20:47.166780  841202 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 01:20:47.166823  841202 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 01:20:47.166834  841202 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 01:20:47.166922  841202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json ...
	I0917 01:20:47.166937  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json: {Name:mkd38d1752014f4bab9dae52a7872fb8a5cc71fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:47.192914  841202 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 01:20:47.192938  841202 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 01:20:47.192970  841202 cache.go:232] Successfully downloaded all kic artifacts
	I0917 01:20:47.193004  841202 start.go:360] acquireMachinesLock for kindnet-333616: {Name:mkc24d8ed730ab1614498d5beb0270c845773667 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:20:47.193133  841202 start.go:364] duration metric: took 104.991µs to acquireMachinesLock for "kindnet-333616"
	I0917 01:20:47.193181  841202 start.go:93] Provisioning new machine with config: &{Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 01:20:47.193276  841202 start.go:125] createHost starting for "" (driver="docker")
	W0917 01:20:45.672555  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:47.672815  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
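
The interleaved pod_ready warnings above come from a second start (pid 832418) polling a coredns pod. What that poller waits for is the standard PodReady condition; a minimal sketch of the check using the Kubernetes API types (the helper name isPodReady is ours):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True,
// which is what the pod_ready poller above is waiting for.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
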
	I0917 01:20:47.195051  841202 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 01:20:47.195285  841202 start.go:159] libmachine.API.Create for "kindnet-333616" (driver="docker")
	I0917 01:20:47.195320  841202 client.go:168] LocalClient.Create starting
	I0917 01:20:47.195405  841202 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 01:20:47.195446  841202 main.go:141] libmachine: Decoding PEM data...
	I0917 01:20:47.195462  841202 main.go:141] libmachine: Parsing certificate...
	I0917 01:20:47.195517  841202 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 01:20:47.195536  841202 main.go:141] libmachine: Decoding PEM data...
	I0917 01:20:47.195549  841202 main.go:141] libmachine: Parsing certificate...
	I0917 01:20:47.195889  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 01:20:47.213519  841202 cli_runner.go:211] docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 01:20:47.213608  841202 network_create.go:284] running [docker network inspect kindnet-333616] to gather additional debugging logs...
	I0917 01:20:47.213640  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616
	W0917 01:20:47.231055  841202 cli_runner.go:211] docker network inspect kindnet-333616 returned with exit code 1
	I0917 01:20:47.231092  841202 network_create.go:287] error running [docker network inspect kindnet-333616]: docker network inspect kindnet-333616: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-333616 not found
	I0917 01:20:47.231127  841202 network_create.go:289] output of [docker network inspect kindnet-333616]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-333616 not found
	
	** /stderr **
	I0917 01:20:47.231231  841202 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 01:20:47.249036  841202 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c0c35d0ccc41 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:29:30:69:13:a2} reservation:<nil>}
	I0917 01:20:47.249865  841202 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f7514a86599 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7e:c0:7e:cc:23:dc} reservation:<nil>}
	I0917 01:20:47.250378  841202 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0cef36e94e8e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:db:fd:7a:23:9f} reservation:<nil>}
	I0917 01:20:47.250966  841202 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8b9dd3e2b39a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:42:6a:d6:f0:80:2b} reservation:<nil>}
	I0917 01:20:47.251698  841202 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-2391a23950fb IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:6b:a9:b6:cd:fd} reservation:<nil>}
	I0917 01:20:47.252201  841202 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-2f0a55cba78d IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:3e:b8:6b:32:ae:3d} reservation:<nil>}
	I0917 01:20:47.253017  841202 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d90400}
	I0917 01:20:47.253041  841202 network_create.go:124] attempt to create docker network kindnet-333616 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0917 01:20:47.253107  841202 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-333616 kindnet-333616
	I0917 01:20:47.313030  841202 network_create.go:108] docker network kindnet-333616 192.168.103.0/24 created
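
The subnet probe above walks private 192.168.x.0/24 candidates (49, 58, 67, 76, 85, 94, 103 in this run) and takes the first one with no existing host interface. A rough, self-contained Go illustration of that scan; the candidate list and step size are inferred from the log, not taken from minikube source:

package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface address already falls inside
// cidr, mirroring the "skipping subnet ... that is taken" checks above.
func taken(cidr *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // be conservative on error
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && cidr.Contains(ipn.IP) {
			return true
		}
	}
	return false
}

func main() {
	// Candidates observed in the log: 192.168.49.0/24, then +9 per attempt.
	for third := 49; third <= 103; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			continue
		}
		if taken(ipnet) {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		return
	}
}
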
	I0917 01:20:47.313138  841202 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-333616" container
	I0917 01:20:47.313224  841202 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 01:20:47.331726  841202 cli_runner.go:164] Run: docker volume create kindnet-333616 --label name.minikube.sigs.k8s.io=kindnet-333616 --label created_by.minikube.sigs.k8s.io=true
	I0917 01:20:47.350777  841202 oci.go:103] Successfully created a docker volume kindnet-333616
	I0917 01:20:47.350848  841202 cli_runner.go:164] Run: docker run --rm --name kindnet-333616-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-333616 --entrypoint /usr/bin/test -v kindnet-333616:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 01:20:47.744926  841202 oci.go:107] Successfully prepared a docker volume kindnet-333616
	I0917 01:20:47.744972  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:47.744994  841202 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 01:20:47.745059  841202 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-333616:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 01:20:53.421561  834635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:20:53.456108  834635 retry.go:31] will retry after 11.768849883s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:20:53Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	W0917 01:20:50.174804  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:52.673786  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	I0917 01:20:52.004993  841202 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-333616:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.25985666s)
	I0917 01:20:52.005028  841202 kic.go:203] duration metric: took 4.26003048s to extract preloaded images to volume ...
	W0917 01:20:52.005133  841202 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 01:20:52.005164  841202 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 01:20:52.005202  841202 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 01:20:52.066749  841202 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-333616 --name kindnet-333616 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-333616 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-333616 --network kindnet-333616 --ip 192.168.103.2 --volume kindnet-333616:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 01:20:52.362306  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Running}}
	I0917 01:20:52.383555  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.406449  841202 cli_runner.go:164] Run: docker exec kindnet-333616 stat /var/lib/dpkg/alternatives/iptables
	I0917 01:20:52.459697  841202 oci.go:144] the created container "kindnet-333616" has a running status.
	I0917 01:20:52.459737  841202 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa...
	I0917 01:20:52.716503  841202 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 01:20:52.742117  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.761330  841202 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 01:20:52.761355  841202 kic_runner.go:114] Args: [docker exec --privileged kindnet-333616 chown docker:docker /home/docker/.ssh/authorized_keys]
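
The kic steps above generate an RSA keypair on the host, place the public half in the container's /home/docker/.ssh/authorized_keys, and fix its ownership with a privileged docker exec. A compressed sketch of the same sequence driving the docker CLI from Go; minikube streams the file contents rather than using docker cp, and the path in main is shortened for illustration:

package main

import (
	"fmt"
	"os/exec"
)

// installSSHKey copies pubKeyPath into the named container's
// authorized_keys and chowns it, mirroring the kic_runner steps above.
func installSSHKey(container, pubKeyPath string) error {
	steps := [][]string{
		{"docker", "cp", pubKeyPath, container + ":/home/docker/.ssh/authorized_keys"},
		{"docker", "exec", "--privileged", container,
			"chown", "docker:docker", "/home/docker/.ssh/authorized_keys"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v\n%s", s, err, out)
		}
	}
	return nil
}

func main() {
	if err := installSSHKey("kindnet-333616", "/tmp/id_rsa.pub"); err != nil {
		fmt.Println(err)
	}
}
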
	I0917 01:20:52.809335  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.831209  841202 machine.go:93] provisionDockerMachine start ...
	I0917 01:20:52.831331  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:52.852889  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:52.853249  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:52.853269  841202 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 01:20:52.992938  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-333616
	
	I0917 01:20:52.992969  841202 ubuntu.go:182] provisioning hostname "kindnet-333616"
	I0917 01:20:52.993051  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.013532  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:53.013764  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:53.013778  841202 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-333616 && echo "kindnet-333616" | sudo tee /etc/hostname
	I0917 01:20:53.166881  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-333616
	
	I0917 01:20:53.166973  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.187352  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:53.187631  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:53.187658  841202 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-333616' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-333616/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-333616' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 01:20:53.332338  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 01:20:53.332408  841202 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 01:20:53.332452  841202 ubuntu.go:190] setting up certificates
	I0917 01:20:53.332472  841202 provision.go:84] configureAuth start
	I0917 01:20:53.332570  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:53.352359  841202 provision.go:143] copyHostCerts
	I0917 01:20:53.352466  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 01:20:53.352481  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 01:20:53.352553  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 01:20:53.352652  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 01:20:53.352661  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 01:20:53.352689  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 01:20:53.352759  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 01:20:53.352766  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 01:20:53.352789  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 01:20:53.352841  841202 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.kindnet-333616 san=[127.0.0.1 192.168.103.2 kindnet-333616 localhost minikube]
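
provision.go above issues a server certificate whose SANs cover 127.0.0.1, the container IP, the hostname, localhost and minikube. For reference, a minimal Go version of that SAN handling with crypto/x509; minikube signs with its own CA, whereas this sketch self-signs to stay short:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-333616"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs listed in the log line above:
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		DNSNames:    []string{"kindnet-333616", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
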
	I0917 01:20:53.973038  841202 provision.go:177] copyRemoteCerts
	I0917 01:20:53.973143  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 01:20:53.973182  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.991696  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.091426  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0917 01:20:54.121737  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 01:20:54.150762  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 01:20:54.179160  841202 provision.go:87] duration metric: took 846.669603ms to configureAuth
	I0917 01:20:54.179187  841202 ubuntu.go:206] setting minikube options for container-runtime
	I0917 01:20:54.179345  841202 config.go:182] Loaded profile config "kindnet-333616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:54.179463  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.198684  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:54.198909  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:54.198925  841202 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 01:20:54.444483  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 01:20:54.444511  841202 machine.go:96] duration metric: took 1.613270939s to provisionDockerMachine
	I0917 01:20:54.444522  841202 client.go:171] duration metric: took 7.249193748s to LocalClient.Create
	I0917 01:20:54.444542  841202 start.go:167] duration metric: took 7.249257601s to libmachine.API.Create "kindnet-333616"
	I0917 01:20:54.444554  841202 start.go:293] postStartSetup for "kindnet-333616" (driver="docker")
	I0917 01:20:54.444572  841202 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 01:20:54.444641  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 01:20:54.444690  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.463166  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.563892  841202 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 01:20:54.567735  841202 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 01:20:54.567765  841202 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 01:20:54.567772  841202 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 01:20:54.567782  841202 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 01:20:54.567795  841202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 01:20:54.567855  841202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 01:20:54.567966  841202 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 01:20:54.568108  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 01:20:54.577885  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 01:20:54.606690  841202 start.go:296] duration metric: took 162.114963ms for postStartSetup
	I0917 01:20:54.607107  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:54.625322  841202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json ...
	I0917 01:20:54.625758  841202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 01:20:54.625821  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.643332  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.737805  841202 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 01:20:54.742465  841202 start.go:128] duration metric: took 7.549168533s to createHost
	I0917 01:20:54.742494  841202 start.go:83] releasing machines lock for "kindnet-333616", held for 7.549346209s
	I0917 01:20:54.742570  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:54.759991  841202 ssh_runner.go:195] Run: cat /version.json
	I0917 01:20:54.760051  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.760083  841202 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 01:20:54.760154  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.778915  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.779306  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.952563  841202 ssh_runner.go:195] Run: systemctl --version
	I0917 01:20:54.957470  841202 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 01:20:55.101309  841202 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 01:20:55.106493  841202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:20:55.131742  841202 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 01:20:55.131831  841202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:20:55.164272  841202 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 01:20:55.164303  841202 start.go:495] detecting cgroup driver to use...
	I0917 01:20:55.164352  841202 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 01:20:55.164430  841202 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 01:20:55.182732  841202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 01:20:55.194856  841202 docker.go:218] disabling cri-docker service (if available) ...
	I0917 01:20:55.194918  841202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 01:20:55.209368  841202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 01:20:55.224908  841202 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 01:20:55.294219  841202 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 01:20:55.366744  841202 docker.go:234] disabling docker service ...
	I0917 01:20:55.366805  841202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 01:20:55.386004  841202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 01:20:55.398281  841202 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 01:20:55.471097  841202 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 01:20:55.620605  841202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 01:20:55.632936  841202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 01:20:55.650751  841202 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 01:20:55.650813  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.665355  841202 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 01:20:55.665449  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.677774  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.688724  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.700141  841202 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 01:20:55.711135  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.722974  841202 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.741236  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.752869  841202 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 01:20:55.762991  841202 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 01:20:55.772774  841202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:20:55.842833  841202 ssh_runner.go:195] Run: sudo systemctl restart crio
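
The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, cgroup_manager = "systemd", a conmon_cgroup = "pod" line, and the unprivileged-port sysctl, followed by daemon-reload and a crio restart. A minimal Go rendition of the two simplest substitutions (the regexes are ours; the file path is the one in the log):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf applies the same kind of whole-line substitutions the
// log performs with sed: force the pause image and the cgroup manager.
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "systemd"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		fmt.Println(err)
	}
}
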
	I0917 01:20:55.939370  841202 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 01:20:55.939456  841202 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 01:20:55.943491  841202 start.go:563] Will wait 60s for crictl version
	I0917 01:20:55.943562  841202 ssh_runner.go:195] Run: which crictl
	I0917 01:20:55.947384  841202 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:20:55.984137  841202 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 01:20:55.984206  841202 ssh_runner.go:195] Run: crio --version
	I0917 01:20:56.022652  841202 ssh_runner.go:195] Run: crio --version
	I0917 01:20:56.062561  841202 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 01:20:56.063985  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 01:20:56.081880  841202 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0917 01:20:56.086073  841202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 01:20:56.098482  841202 kubeadm.go:875] updating cluster {Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 01:20:56.098622  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:56.098685  841202 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:20:56.169870  841202 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 01:20:56.169898  841202 crio.go:433] Images already preloaded, skipping extraction
	I0917 01:20:56.169953  841202 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:20:56.206753  841202 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 01:20:56.206784  841202 cache_images.go:85] Images are preloaded, skipping loading
	I0917 01:20:56.206794  841202 kubeadm.go:926] updating node { 192.168.103.2 8443 v1.34.0 crio true true} ...
	I0917 01:20:56.206913  841202 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-333616 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
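
The kubelet drop-in above is rendered from the node config: hostname-override and node-ip come from the cluster struct, and the empty ExecStart= line first clears the distro default before setting minikube's own. A small text/template sketch of rendering such a unit (the template text is ours; the values are the ones in the log):

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Name}} --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"Version": "v1.34.0",
		"Name":    "kindnet-333616",
		"IP":      "192.168.103.2",
	})
}
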
	I0917 01:20:56.207001  841202 ssh_runner.go:195] Run: crio config
	I0917 01:20:56.253538  841202 cni.go:84] Creating CNI manager for "kindnet"
	I0917 01:20:56.253567  841202 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 01:20:56.253590  841202 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-333616 NodeName:kindnet-333616 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 01:20:56.253716  841202 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-333616"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 01:20:56.253775  841202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 01:20:56.264146  841202 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 01:20:56.264224  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 01:20:56.274749  841202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0917 01:20:56.293906  841202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 01:20:56.316487  841202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
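The rendered config above is staged as kubeadm.yaml.new and only copied over kubeadm.yaml once the node is prepared. A minimal sketch for sanity-checking such a staged file with the same kubeadm binary (paths taken from the log; kubeadm config validate confirms the file parses and is internally consistent):
	# assumes the node layout shown in this log
	sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new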
	I0917 01:20:56.336550  841202 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0917 01:20:56.340325  841202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 01:20:56.352936  841202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:20:56.418882  841202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 01:20:56.445037  841202 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616 for IP: 192.168.103.2
	I0917 01:20:56.445069  841202 certs.go:194] generating shared ca certs ...
	I0917 01:20:56.445096  841202 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.445265  841202 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 01:20:56.445328  841202 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 01:20:56.445342  841202 certs.go:256] generating profile certs ...
	I0917 01:20:56.445433  841202 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key
	I0917 01:20:56.445452  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt with IP's: []
	I0917 01:20:56.575658  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt ...
	I0917 01:20:56.575692  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt: {Name:mke4c01e2ad680ec95da34129972695bc352dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.575918  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key ...
	I0917 01:20:56.575935  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key: {Name:mk196e199bf8e509067e257fa5978cc4017a9515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.576063  841202 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883
	I0917 01:20:56.576083  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0917 01:20:56.891743  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 ...
	I0917 01:20:56.891776  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883: {Name:mk080638a3e062c43555f3e1bbede660cca9c8ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.891955  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883 ...
	I0917 01:20:56.891969  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883: {Name:mkbe71ad29db0d31be773639ab90fdd03d84b089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.892043  841202 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt
	I0917 01:20:56.892145  841202 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key
	I0917 01:20:56.892212  841202 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key
	I0917 01:20:56.892228  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt with IP's: []
	W0917 01:20:55.172587  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:57.173997  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:59.673374  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	I0917 01:20:57.205489  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt ...
	I0917 01:20:57.205524  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt: {Name:mkf6b5ecd44d0faf20e6e53acc7eeebe333eca17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:57.205728  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key ...
	I0917 01:20:57.205746  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key: {Name:mk2b3f753e527ada6b46c8fd672f3b210e243668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:57.205983  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 01:20:57.206033  841202 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 01:20:57.206049  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 01:20:57.206079  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 01:20:57.206110  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 01:20:57.206143  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 01:20:57.206196  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 01:20:57.206849  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 01:20:57.236316  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 01:20:57.264039  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 01:20:57.290903  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 01:20:57.316649  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 01:20:57.343336  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 01:20:57.369426  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 01:20:57.395757  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 01:20:57.422129  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 01:20:57.452169  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 01:20:57.479060  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 01:20:57.505045  841202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
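The apiserver profile cert copied above was signed for the service VIP, the loopback addresses, and the node IP (the log shows IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]). A sketch for confirming those SANs on the node once the files are in place (standard openssl usage, not taken from the log):
	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'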
	I0917 01:20:57.524210  841202 ssh_runner.go:195] Run: openssl version
	I0917 01:20:57.530236  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 01:20:57.540421  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.544062  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.544118  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.551188  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 01:20:57.561515  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 01:20:57.572283  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.576261  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.576323  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.583692  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 01:20:57.593924  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 01:20:57.604001  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.608154  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.608211  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.615475  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
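The test -L / ln -fs pairs above follow the c_rehash convention: the symlink name is the certificate's subject hash plus a ".0" suffix, which is how OpenSSL looks up a CA in /etc/ssl/certs. A sketch of the same derivation done by hand:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941, matching the link name in the log
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"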
	I0917 01:20:57.625656  841202 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 01:20:57.629541  841202 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 01:20:57.629606  841202 kubeadm.go:392] StartCluster: {Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:20:57.629685  841202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 01:20:57.629748  841202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 01:20:57.668306  841202 cri.go:89] found id: ""
	I0917 01:20:57.668384  841202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 01:20:57.679315  841202 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 01:20:57.689592  841202 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 01:20:57.689666  841202 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 01:20:57.699255  841202 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 01:20:57.699272  841202 kubeadm.go:157] found existing configuration files:
	
	I0917 01:20:57.699327  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 01:20:57.708879  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 01:20:57.708950  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 01:20:57.718406  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 01:20:57.728172  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 01:20:57.728251  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 01:20:57.737991  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 01:20:57.748427  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 01:20:57.748487  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 01:20:57.757822  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 01:20:57.767640  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 01:20:57.767708  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 01:20:57.776934  841202 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 01:20:57.849477  841202 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0917 01:20:57.909176  841202 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 01:21:01.172820  832418 pod_ready.go:94] pod "coredns-66bc5c9577-qqxrk" is "Ready"
	I0917 01:21:01.172851  832418 pod_ready.go:86] duration metric: took 38.505527826s for pod "coredns-66bc5c9577-qqxrk" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.175617  832418 pod_ready.go:83] waiting for pod "etcd-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.179752  832418 pod_ready.go:94] pod "etcd-embed-certs-748988" is "Ready"
	I0917 01:21:01.179779  832418 pod_ready.go:86] duration metric: took 4.135657ms for pod "etcd-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.182426  832418 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.186899  832418 pod_ready.go:94] pod "kube-apiserver-embed-certs-748988" is "Ready"
	I0917 01:21:01.186928  832418 pod_ready.go:86] duration metric: took 4.474792ms for pod "kube-apiserver-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.189100  832418 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.371319  832418 pod_ready.go:94] pod "kube-controller-manager-embed-certs-748988" is "Ready"
	I0917 01:21:01.371352  832418 pod_ready.go:86] duration metric: took 182.22498ms for pod "kube-controller-manager-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.570958  832418 pod_ready.go:83] waiting for pod "kube-proxy-2bkdq" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.970376  832418 pod_ready.go:94] pod "kube-proxy-2bkdq" is "Ready"
	I0917 01:21:01.970432  832418 pod_ready.go:86] duration metric: took 399.444446ms for pod "kube-proxy-2bkdq" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.171077  832418 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.570435  832418 pod_ready.go:94] pod "kube-scheduler-embed-certs-748988" is "Ready"
	I0917 01:21:02.570467  832418 pod_ready.go:86] duration metric: took 399.360883ms for pod "kube-scheduler-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.570484  832418 pod_ready.go:40] duration metric: took 39.908444834s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 01:21:02.617522  832418 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 01:21:02.619899  832418 out.go:179] * Done! kubectl is now configured to use "embed-certs-748988" cluster and "default" namespace by default
	I0917 01:21:05.225533  834635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:21:05.270428  834635 out.go:203] 
	W0917 01:21:05.271803  834635 out.go:285] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:05Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	
	W0917 01:21:05.271827  834635 out.go:285] * 
	W0917 01:21:05.273977  834635 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 01:21:05.275509  834635 out.go:203] 
	I0917 01:21:02.051660  819928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:21:02.085236  819928 retry.go:31] will retry after 15.073168141s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:02Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	I0917 01:21:08.749313  841202 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0917 01:21:08.749411  841202 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 01:21:08.749519  841202 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0917 01:21:08.749589  841202 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0917 01:21:08.749650  841202 kubeadm.go:310] OS: Linux
	I0917 01:21:08.749713  841202 kubeadm.go:310] CGROUPS_CPU: enabled
	I0917 01:21:08.749779  841202 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0917 01:21:08.749841  841202 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0917 01:21:08.749902  841202 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0917 01:21:08.749959  841202 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0917 01:21:08.750017  841202 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0917 01:21:08.750085  841202 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0917 01:21:08.750143  841202 kubeadm.go:310] CGROUPS_IO: enabled
	I0917 01:21:08.750240  841202 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 01:21:08.750408  841202 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 01:21:08.750528  841202 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 01:21:08.750612  841202 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 01:21:08.752776  841202 out.go:252]   - Generating certificates and keys ...
	I0917 01:21:08.752899  841202 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 01:21:08.752994  841202 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 01:21:08.753166  841202 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 01:21:08.753271  841202 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 01:21:08.753363  841202 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 01:21:08.753458  841202 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 01:21:08.753543  841202 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 01:21:08.753685  841202 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-333616 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0917 01:21:08.753763  841202 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 01:21:08.753955  841202 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-333616 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0917 01:21:08.754090  841202 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 01:21:08.754192  841202 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 01:21:08.754257  841202 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 01:21:08.754342  841202 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 01:21:08.754430  841202 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 01:21:08.754478  841202 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 01:21:08.754527  841202 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 01:21:08.754580  841202 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 01:21:08.754625  841202 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 01:21:08.754700  841202 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 01:21:08.754755  841202 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 01:21:08.756322  841202 out.go:252]   - Booting up control plane ...
	I0917 01:21:08.756479  841202 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 01:21:08.756610  841202 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 01:21:08.756707  841202 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 01:21:08.756865  841202 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 01:21:08.756981  841202 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0917 01:21:08.757139  841202 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0917 01:21:08.757242  841202 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 01:21:08.757292  841202 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 01:21:08.757475  841202 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 01:21:08.757598  841202 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 01:21:08.757667  841202 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.884368ms
	I0917 01:21:08.757780  841202 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0917 01:21:08.757913  841202 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I0917 01:21:08.758047  841202 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0917 01:21:08.758174  841202 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0917 01:21:08.758291  841202 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.005156484s
	I0917 01:21:08.758398  841202 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.505889566s
	I0917 01:21:08.758508  841202 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.501611145s
	I0917 01:21:08.758646  841202 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 01:21:08.758798  841202 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 01:21:08.758886  841202 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 01:21:08.759100  841202 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-333616 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 01:21:08.759198  841202 kubeadm.go:310] [bootstrap-token] Using token: 162lgr.l6wrgxxcju3qv1m6
	I0917 01:21:08.760426  841202 out.go:252]   - Configuring RBAC rules ...
	I0917 01:21:08.760541  841202 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 01:21:08.760645  841202 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 01:21:08.760852  841202 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 01:21:08.761023  841202 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 01:21:08.761194  841202 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 01:21:08.761327  841202 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 01:21:08.761559  841202 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 01:21:08.761636  841202 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 01:21:08.761697  841202 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 01:21:08.761708  841202 kubeadm.go:310] 
	I0917 01:21:08.761785  841202 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 01:21:08.761796  841202 kubeadm.go:310] 
	I0917 01:21:08.761916  841202 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 01:21:08.761932  841202 kubeadm.go:310] 
	I0917 01:21:08.761974  841202 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 01:21:08.762071  841202 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 01:21:08.762135  841202 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 01:21:08.762145  841202 kubeadm.go:310] 
	I0917 01:21:08.762215  841202 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 01:21:08.762222  841202 kubeadm.go:310] 
	I0917 01:21:08.762262  841202 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 01:21:08.762269  841202 kubeadm.go:310] 
	I0917 01:21:08.762319  841202 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 01:21:08.762431  841202 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 01:21:08.762533  841202 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 01:21:08.762551  841202 kubeadm.go:310] 
	I0917 01:21:08.762669  841202 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 01:21:08.762785  841202 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 01:21:08.762797  841202 kubeadm.go:310] 
	I0917 01:21:08.762899  841202 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 162lgr.l6wrgxxcju3qv1m6 \
	I0917 01:21:08.763036  841202 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 \
	I0917 01:21:08.763072  841202 kubeadm.go:310] 	--control-plane 
	I0917 01:21:08.763080  841202 kubeadm.go:310] 
	I0917 01:21:08.763190  841202 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 01:21:08.763210  841202 kubeadm.go:310] 
	I0917 01:21:08.763278  841202 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 162lgr.l6wrgxxcju3qv1m6 \
	I0917 01:21:08.763415  841202 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 
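The discovery hash in the join command above is the SHA-256 of the cluster CA's DER-encoded public key. A sketch of the standard kubeadm recipe for recomputing it, assuming the RSA CA at the path minikube uses (a stock kubeadm install keeps the CA under /etc/kubernetes/pki instead):
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'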
	I0917 01:21:08.763437  841202 cni.go:84] Creating CNI manager for "kindnet"
	I0917 01:21:08.766700  841202 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Sep 17 01:20:23 default-k8s-diff-port-377743 systemd[1]: crio.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 01:20:23 default-k8s-diff-port-377743 systemd[1]: crio.service: Failed with result 'exit-code'.
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: Starting Container Runtime Interface for OCI (CRI-O)...
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900219900Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900375290Z" level=info msg="Node configuration value for hugetlb cgroup is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900412189Z" level=info msg="Node configuration value for pid cgroup is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900479004Z" level=info msg="Node configuration value for memoryswap cgroup is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900490617Z" level=info msg="Node configuration value for cgroup v2 is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.906797224Z" level=info msg="Node configuration value for systemd CollectMode is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.913464400Z" level=info msg="Node configuration value for systemd AllowedCPUs is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.913750835Z" level=info msg="[graphdriver] using prior storage driver: overlay"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.914768261Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.917639624Z" level=info msg="Conmon does support the --sync option"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.917673061Z" level=info msg="Conmon does support the --log-global-size-max option"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919571823Z" level=info msg="Using seccomp default profile when unspecified: true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919593931Z" level=info msg="No seccomp profile specified, using the internal default"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919603651Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919611839Z" level=info msg="No blockio config file specified, blockio not configured"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919618928Z" level=info msg="RDT not available in the host system"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.924637958Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.924675992Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.937133863Z" level=fatal msg="too many open files"
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: crio.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: crio.service: Failed with result 'exit-code'.
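The fatal "too many open files" right after CNI setup typically means the crio process hit a file-descriptor or inotify limit on the busy CI host, and it is why the socket refuses connections everywhere else in this report. A sketch of the limits worth inspecting (illustrative commands, not from the log):
	ulimit -n
	cat /proc/sys/fs/inotify/max_user_instances /proc/sys/fs/inotify/max_user_watches
	systemctl show crio -p LimitNOFILE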
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:10Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8444 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
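127.0.0.11 is Docker's embedded DNS resolver inside user-defined networks, so the martian-destination noise above is pod traffic aimed at that resolver leaking into the host's kernel log; it is cosmetic here rather than the cause of the failure. A sketch for quieting it (a tuning knob, not a fix):
	sudo sysctl -w net.ipv4.conf.all.log_martians=0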
	
	
	==> kernel <==
	 01:21:11 up  4:03,  0 users,  load average: 2.69, 3.20, 2.37
	Linux default-k8s-diff-port-377743 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kubelet <==
	-- No entries --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 01:21:10.636551  845774 logs.go:279] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:10Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:10.670101  845774 logs.go:279] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:10Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:10.704135  845774 logs.go:279] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:10Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:10.737495  845774 logs.go:279] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:10Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:10.770260  845774 logs.go:279] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:10Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:10.803094  845774 logs.go:279] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:10Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:10.836918  845774 logs.go:279] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:10Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:10.868414  845774 logs.go:279] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:10Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:10.901538  845774 logs.go:279] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:10Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""

                                                
                                                
** /stderr **
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743: exit status 6 (306.442311ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 01:21:11.428522  846039 status.go:458] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-377743" does not appear in /home/jenkins/minikube-integration/21550-517646/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "default-k8s-diff-port-377743" apiserver is not running, skipping kubectl commands (state="Stopped")
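The status error matches the WARNING in stdout: the profile's endpoint is missing from the kubeconfig, so kubectl points at a stale entry. A sketch of the repair minikube itself suggests:
	minikube update-context -p default-k8s-diff-port-377743
	kubectl config get-contexts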
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.66s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-377743 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-377743 --alsologtostderr -v=1: exit status 80 (1.702882905s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-377743 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 01:21:11.491001  846151 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:21:11.491141  846151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:21:11.491151  846151 out.go:374] Setting ErrFile to fd 2...
	I0917 01:21:11.491159  846151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:21:11.491485  846151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 01:21:11.491788  846151 out.go:368] Setting JSON to false
	I0917 01:21:11.491827  846151 mustload.go:65] Loading cluster: default-k8s-diff-port-377743
	I0917 01:21:11.492200  846151 config.go:182] Loaded profile config "default-k8s-diff-port-377743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:21:11.492626  846151 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-377743 --format={{.State.Status}}
	I0917 01:21:11.511157  846151 host.go:66] Checking if "default-k8s-diff-port-377743" exists ...
	I0917 01:21:11.511444  846151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:21:11.572718  846151 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-09-17 01:21:11.560668175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 01:21:11.573481  846151 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0/minikube-v1.37.0-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-377743 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0917 01:21:11.575496  846151 out.go:179] * Pausing node default-k8s-diff-port-377743 ... 
	I0917 01:21:11.577371  846151 host.go:66] Checking if "default-k8s-diff-port-377743" exists ...
	I0917 01:21:11.577764  846151 ssh_runner.go:195] Run: systemctl --version
	I0917 01:21:11.577821  846151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-377743
	I0917 01:21:11.597230  846151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/default-k8s-diff-port-377743/id_rsa Username:docker}
	I0917 01:21:11.691686  846151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 01:21:11.704821  846151 pause.go:51] kubelet running: false
	I0917 01:21:11.704906  846151 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0917 01:21:11.775720  846151 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0917 01:21:11.775795  846151 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0917 01:21:11.883826  846151 retry.go:31] will retry after 145.525643ms: list running: crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:11Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	time="2025-09-17T01:21:11Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	time="2025-09-17T01:21:11Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	time="2025-09-17T01:21:11Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	I0917 01:21:12.030179  846151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 01:21:12.043380  846151 pause.go:51] kubelet running: false
	I0917 01:21:12.043469  846151 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0917 01:21:12.111624  846151 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0917 01:21:12.111716  846151 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0917 01:21:12.221923  846151 retry.go:31] will retry after 219.088983ms: list running: crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:12Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	time="2025-09-17T01:21:12Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	time="2025-09-17T01:21:12Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	time="2025-09-17T01:21:12Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	I0917 01:21:12.441347  846151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 01:21:12.455136  846151 pause.go:51] kubelet running: false
	I0917 01:21:12.455207  846151 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0917 01:21:12.528086  846151 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0917 01:21:12.528211  846151 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0917 01:21:12.639001  846151 retry.go:31] will retry after 293.503549ms: list running: crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:12Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	time="2025-09-17T01:21:12Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	time="2025-09-17T01:21:12Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	time="2025-09-17T01:21:12Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	I0917 01:21:12.933584  846151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 01:21:12.946279  846151 pause.go:51] kubelet running: false
	I0917 01:21:12.946347  846151 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0917 01:21:13.018686  846151 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0917 01:21:13.018778  846151 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0917 01:21:13.135691  846151 out.go:203] 
	W0917 01:21:13.136891  846151 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:13Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	time="2025-09-17T01:21:13Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	time="2025-09-17T01:21:13Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	time="2025-09-17T01:21:13Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	
	W0917 01:21:13.136915  846151 out.go:285] * 
	W0917 01:21:13.141623  846151 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 01:21:13.142914  846151 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-377743 --alsologtostderr -v=1 failed: exit status 80
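
Exit status 80 is how the pause failure above surfaces to the test. A small sketch, assuming only the standard library's os/exec, of how a harness like helpers_test.go can capture a subcommand's exit code; the 80-to-GUEST_PAUSE mapping itself is minikube-internal and only observed here:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "pause",
			"-p", "default-k8s-diff-port-377743", "--alsologtostderr", "-v=1")
		out, err := cmd.CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// The logs above show exit status 80 paired with GUEST_PAUSE.
			fmt.Printf("exit status %d\n%s", ee.ExitCode(), out)
		}
	}
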
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-377743
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-377743:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc",
	        "Created": "2025-09-17T01:18:46.928961651Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 834842,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T01:20:19.100007065Z",
	            "FinishedAt": "2025-09-17T01:20:18.156352163Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/hosts",
	        "LogPath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc-json.log",
	        "Name": "/default-k8s-diff-port-377743",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-377743:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-377743",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc",
	                "LowerDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-377743",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-377743/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-377743",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-377743",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-377743",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "501c8109c3bc8c00897c7b54c7d2675ba4a3bb996e4f4f197def146bb8ff190a",
	            "SandboxKey": "/var/run/docker/netns/501c8109c3bc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-377743": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:0c:94:b6:e5:07",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2391a23950fb5471e73c0e959464ddf40474359ad3a94730b27d02f587b2a08a",
	                    "EndpointID": "db62485899e89b194074d4c36c3a42a7db3e7cbeba5ca889e1cc809ec8289fa5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-377743",
	                        "ce5cf21d301e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
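
The "Ports" block in the inspect output above is what the earlier cli_runner.go step reads to find the SSH endpoint (22/tcp mapped to 127.0.0.1:33473). A sketch reproducing that lookup with the exact Go template the log shows, assuming a local docker CLI:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort resolves the host port published for the container's
	// 22/tcp, using the same template as the cli_runner.go line above.
	func hostSSHPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("default-k8s-diff-port-377743")
		fmt.Println(port, err) // expected "33473" per the inspect above
	}
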
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743: exit status 6 (391.634643ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 01:21:13.543330  846547 status.go:458] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-377743" does not appear in /home/jenkins/minikube-integration/21550-517646/kubeconfig

                                                
                                                
** /stderr **
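
The exit status 6 above comes from status.go failing to find the profile's endpoint in the kubeconfig. A minimal sketch of that check, assuming k8s.io/client-go is available as a dependency (clientcmd.LoadFromFile is its real kubeconfig loader); the usual fix minikube suggests is `minikube update-context`:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21550-517646/kubeconfig")
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		name := "default-k8s-diff-port-377743"
		if _, ok := cfg.Clusters[name]; !ok {
			// This is the condition behind the status.go:458 error above.
			fmt.Printf("%q does not appear in the kubeconfig\n", name)
		}
	}
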
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-377743 logs -n 25
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-333616 sudo journalctl -xeu kubelet --all --full --no-pager                                                                      │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/kubernetes/kubelet.conf                                                                                     │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /var/lib/kubelet/config.yaml                                                                                     │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl status docker --all --full --no-pager                                                                      │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl cat docker --no-pager                                                                                      │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/docker/daemon.json                                                                                          │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo docker system info                                                                                                   │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl status cri-docker --all --full --no-pager                                                                  │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl cat cri-docker --no-pager                                                                                  │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                             │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                       │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cri-dockerd --version                                                                                                │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl status containerd --all --full --no-pager                                                                  │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl cat containerd --no-pager                                                                                  │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /lib/systemd/system/containerd.service                                                                           │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/containerd/config.toml                                                                                      │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo containerd config dump                                                                                               │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl status crio --all --full --no-pager                                                                        │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl cat crio --no-pager                                                                                        │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                              │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo crio config                                                                                                          │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ delete  │ -p auto-333616                                                                                                                           │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ start   │ -p kindnet-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio │ kindnet-333616               │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ image   │ default-k8s-diff-port-377743 image list --format=json                                                                                    │ default-k8s-diff-port-377743 │ jenkins │ v1.37.0 │ 17 Sep 25 01:21 UTC │ 17 Sep 25 01:21 UTC │
	│ pause   │ -p default-k8s-diff-port-377743 --alsologtostderr -v=1                                                                                   │ default-k8s-diff-port-377743 │ jenkins │ v1.37.0 │ 17 Sep 25 01:21 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 01:20:46
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 01:20:46.991253  841202 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:20:46.991355  841202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:20:46.991363  841202 out.go:374] Setting ErrFile to fd 2...
	I0917 01:20:46.991367  841202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:20:46.991948  841202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 01:20:46.993103  841202 out.go:368] Setting JSON to false
	I0917 01:20:46.994427  841202 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14590,"bootTime":1758057457,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 01:20:46.994531  841202 start.go:140] virtualization: kvm guest
	I0917 01:20:46.996762  841202 out.go:179] * [kindnet-333616] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 01:20:46.998033  841202 notify.go:220] Checking for updates...
	I0917 01:20:46.998040  841202 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 01:20:46.999333  841202 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:20:47.000646  841202 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 01:20:47.002223  841202 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 01:20:47.003668  841202 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 01:20:47.005002  841202 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:20:47.006954  841202 config.go:182] Loaded profile config "default-k8s-diff-port-377743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007104  841202 config.go:182] Loaded profile config "embed-certs-748988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007208  841202 config.go:182] Loaded profile config "kubernetes-upgrade-790254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007331  841202 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 01:20:47.034761  841202 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 01:20:47.034876  841202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:20:47.096866  841202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 01:20:47.086442486 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 01:20:47.097016  841202 docker.go:318] overlay module found
	I0917 01:20:47.099127  841202 out.go:179] * Using the docker driver based on user configuration
	I0917 01:20:47.100598  841202 start.go:304] selected driver: docker
	I0917 01:20:47.100620  841202 start.go:918] validating driver "docker" against <nil>
	I0917 01:20:47.100634  841202 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:20:47.101213  841202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:20:47.157653  841202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 01:20:47.147017932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 01:20:47.157843  841202 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0917 01:20:47.158047  841202 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 01:20:47.159808  841202 out.go:179] * Using Docker driver with root privileges
	I0917 01:20:47.161165  841202 cni.go:84] Creating CNI manager for "kindnet"
	I0917 01:20:47.161185  841202 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 01:20:47.161271  841202 start.go:348] cluster config:
	{Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Netwo
rkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInte
rval:1m0s}
	I0917 01:20:47.162725  841202 out.go:179] * Starting "kindnet-333616" primary control-plane node in "kindnet-333616" cluster
	I0917 01:20:47.164093  841202 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 01:20:47.165424  841202 out.go:179] * Pulling base image v0.0.48 ...
	I0917 01:20:47.166669  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:47.166713  841202 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 01:20:47.166725  841202 cache.go:58] Caching tarball of preloaded images
	I0917 01:20:47.166780  841202 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 01:20:47.166823  841202 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 01:20:47.166834  841202 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 01:20:47.166922  841202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json ...
	I0917 01:20:47.166937  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json: {Name:mkd38d1752014f4bab9dae52a7872fb8a5cc71fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:47.192914  841202 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 01:20:47.192938  841202 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 01:20:47.192970  841202 cache.go:232] Successfully downloaded all kic artifacts
	I0917 01:20:47.193004  841202 start.go:360] acquireMachinesLock for kindnet-333616: {Name:mkc24d8ed730ab1614498d5beb0270c845773667 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:20:47.193133  841202 start.go:364] duration metric: took 104.991µs to acquireMachinesLock for "kindnet-333616"
	I0917 01:20:47.193181  841202 start.go:93] Provisioning new machine with config: &{Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 01:20:47.193276  841202 start.go:125] createHost starting for "" (driver="docker")
	W0917 01:20:45.672555  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:47.672815  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	I0917 01:20:47.195051  841202 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 01:20:47.195285  841202 start.go:159] libmachine.API.Create for "kindnet-333616" (driver="docker")
	I0917 01:20:47.195320  841202 client.go:168] LocalClient.Create starting
	I0917 01:20:47.195405  841202 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 01:20:47.195446  841202 main.go:141] libmachine: Decoding PEM data...
	I0917 01:20:47.195462  841202 main.go:141] libmachine: Parsing certificate...
	I0917 01:20:47.195517  841202 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 01:20:47.195536  841202 main.go:141] libmachine: Decoding PEM data...
	I0917 01:20:47.195549  841202 main.go:141] libmachine: Parsing certificate...
	I0917 01:20:47.195889  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 01:20:47.213519  841202 cli_runner.go:211] docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 01:20:47.213608  841202 network_create.go:284] running [docker network inspect kindnet-333616] to gather additional debugging logs...
	I0917 01:20:47.213640  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616
	W0917 01:20:47.231055  841202 cli_runner.go:211] docker network inspect kindnet-333616 returned with exit code 1
	I0917 01:20:47.231092  841202 network_create.go:287] error running [docker network inspect kindnet-333616]: docker network inspect kindnet-333616: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-333616 not found
	I0917 01:20:47.231127  841202 network_create.go:289] output of [docker network inspect kindnet-333616]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-333616 not found
	
	** /stderr **
	I0917 01:20:47.231231  841202 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 01:20:47.249036  841202 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c0c35d0ccc41 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:29:30:69:13:a2} reservation:<nil>}
	I0917 01:20:47.249865  841202 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f7514a86599 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7e:c0:7e:cc:23:dc} reservation:<nil>}
	I0917 01:20:47.250378  841202 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0cef36e94e8e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:db:fd:7a:23:9f} reservation:<nil>}
	I0917 01:20:47.250966  841202 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8b9dd3e2b39a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:42:6a:d6:f0:80:2b} reservation:<nil>}
	I0917 01:20:47.251698  841202 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-2391a23950fb IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:6b:a9:b6:cd:fd} reservation:<nil>}
	I0917 01:20:47.252201  841202 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-2f0a55cba78d IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:3e:b8:6b:32:ae:3d} reservation:<nil>}
	I0917 01:20:47.253017  841202 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d90400}
	I0917 01:20:47.253041  841202 network_create.go:124] attempt to create docker network kindnet-333616 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0917 01:20:47.253107  841202 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-333616 kindnet-333616
	I0917 01:20:47.313030  841202 network_create.go:108] docker network kindnet-333616 192.168.103.0/24 created
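
The network.go lines above show the free-subnet scan: candidates 192.168.49.0/24 through 192.168.94.0/24 are skipped as taken, and 192.168.103.0/24 is chosen. A hedged sketch of that scan; the step of 9 in the third octet is inferred from the logged sequence (49, 58, 67, 76, 85, 94, 103), not from minikube's source:

	package main

	import "fmt"

	// freeSubnet returns the first 192.168.x.0/24 candidate not already
	// claimed by an existing bridge network.
	func freeSubnet(taken map[string]bool) string {
		for octet := 49; octet <= 255; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
			"192.168.85.0/24": true, "192.168.94.0/24": true,
		}
		fmt.Println(freeSubnet(taken)) // 192.168.103.0/24, matching the log
	}
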
	I0917 01:20:47.313138  841202 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-333616" container
	I0917 01:20:47.313224  841202 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 01:20:47.331726  841202 cli_runner.go:164] Run: docker volume create kindnet-333616 --label name.minikube.sigs.k8s.io=kindnet-333616 --label created_by.minikube.sigs.k8s.io=true
	I0917 01:20:47.350777  841202 oci.go:103] Successfully created a docker volume kindnet-333616
	I0917 01:20:47.350848  841202 cli_runner.go:164] Run: docker run --rm --name kindnet-333616-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-333616 --entrypoint /usr/bin/test -v kindnet-333616:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 01:20:47.744926  841202 oci.go:107] Successfully prepared a docker volume kindnet-333616
	I0917 01:20:47.744972  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:47.744994  841202 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 01:20:47.745059  841202 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-333616:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
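The preload check and extraction above hinge on a deterministic tarball name. A sketch that rebuilds the filename seen in this run (schema version 18, Kubernetes v1.34.0, cri-o, amd64); the helper name is illustrative:

```go
package main

import "fmt"

// preloadName rebuilds the tarball filename visible in the log:
// preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
func preloadName(schema int, k8sVersion, runtime, arch string) string {
	return fmt.Sprintf("preloaded-images-k8s-v%d-%s-%s-overlay-%s.tar.lz4",
		schema, k8sVersion, runtime, arch)
}

func main() {
	fmt.Println(preloadName(18, "v1.34.0", "cri-o", "amd64"))
}
```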
	I0917 01:20:53.421561  834635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:20:53.456108  834635 retry.go:31] will retry after 11.768849883s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:20:53Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
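retry.go reissues the crictl probe with a randomized backoff until crio.sock starts accepting connections. A hedged sketch of that retry loop, with a fixed illustrative delay instead of minikube's jittered one:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryCommand runs a command until it succeeds or attempts run out,
// sleeping between tries -- the same shape as the "will retry after ..."
// lines in the log.
func retryCommand(name string, args []string, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command(name, args...).Run(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed (%v); retrying in %s\n", i+1, err, delay)
		time.Sleep(delay)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	// Probe the CRI runtime the way the log does; this fails fast while
	// /var/run/crio/crio.sock is not yet accepting connections.
	if err := retryCommand("sudo", []string{"/usr/bin/crictl", "version"}, 5, 10*time.Second); err != nil {
		fmt.Println(err)
	}
}
```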
	W0917 01:20:50.174804  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:52.673786  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	I0917 01:20:52.004993  841202 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-333616:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.25985666s)
	I0917 01:20:52.005028  841202 kic.go:203] duration metric: took 4.26003048s to extract preloaded images to volume ...
	W0917 01:20:52.005133  841202 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 01:20:52.005164  841202 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 01:20:52.005202  841202 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 01:20:52.066749  841202 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-333616 --name kindnet-333616 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-333616 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-333616 --network kindnet-333616 --ip 192.168.103.2 --volume kindnet-333616:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 01:20:52.362306  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Running}}
	I0917 01:20:52.383555  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.406449  841202 cli_runner.go:164] Run: docker exec kindnet-333616 stat /var/lib/dpkg/alternatives/iptables
	I0917 01:20:52.459697  841202 oci.go:144] the created container "kindnet-333616" has a running status.
	I0917 01:20:52.459737  841202 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa...
	I0917 01:20:52.716503  841202 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 01:20:52.742117  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.761330  841202 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 01:20:52.761355  841202 kic_runner.go:114] Args: [docker exec --privileged kindnet-333616 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 01:20:52.809335  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.831209  841202 machine.go:93] provisionDockerMachine start ...
	I0917 01:20:52.831331  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:52.852889  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:52.853249  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:52.853269  841202 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 01:20:52.992938  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-333616
	
	I0917 01:20:52.992969  841202 ubuntu.go:182] provisioning hostname "kindnet-333616"
	I0917 01:20:52.993051  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.013532  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:53.013764  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:53.013778  841202 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-333616 && echo "kindnet-333616" | sudo tee /etc/hostname
	I0917 01:20:53.166881  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-333616
	
	I0917 01:20:53.166973  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.187352  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:53.187631  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:53.187658  841202 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-333616' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-333616/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-333616' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 01:20:53.332338  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
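provisionDockerMachine drives these steps over a "native" SSH client pointed at the container's forwarded port (127.0.0.1:33478 in this run). A self-contained sketch of running one command that way with golang.org/x/crypto/ssh; runOverSSH is a hypothetical helper, and host-key checking is skipped only because the target is a local kic container:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials the forwarded docker port on 127.0.0.1 and runs one
// command as the "docker" user, authenticating with the machine key the
// log shows being installed into authorized_keys.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container only
	})
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.Output(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("127.0.0.1:33478", "docker",
		"/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa",
		"hostname")
	if err != nil {
		panic(err)
	}
	fmt.Print(out) // kindnet-333616
}
```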
	I0917 01:20:53.332408  841202 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 01:20:53.332452  841202 ubuntu.go:190] setting up certificates
	I0917 01:20:53.332472  841202 provision.go:84] configureAuth start
	I0917 01:20:53.332570  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:53.352359  841202 provision.go:143] copyHostCerts
	I0917 01:20:53.352466  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 01:20:53.352481  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 01:20:53.352553  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 01:20:53.352652  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 01:20:53.352661  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 01:20:53.352689  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 01:20:53.352759  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 01:20:53.352766  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 01:20:53.352789  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 01:20:53.352841  841202 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.kindnet-333616 san=[127.0.0.1 192.168.103.2 kindnet-333616 localhost minikube]
	I0917 01:20:53.973038  841202 provision.go:177] copyRemoteCerts
	I0917 01:20:53.973143  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 01:20:53.973182  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.991696  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.091426  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0917 01:20:54.121737  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 01:20:54.150762  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 01:20:54.179160  841202 provision.go:87] duration metric: took 846.669603ms to configureAuth
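configureAuth generates a server certificate whose SANs are exactly the list logged above (127.0.0.1, 192.168.103.2, kindnet-333616, localhost, minikube). A simplified crypto/x509 sketch; unlike the real flow, which signs with ca.pem/ca-key.pem, this one self-signs for brevity:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// serverCertPEM issues a cert whose SANs mirror the log's
// san=[127.0.0.1 192.168.103.2 kindnet-333616 localhost minikube].
func serverCertPEM() ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-333616"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile config
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		DNSNames:     []string{"kindnet-333616", "localhost", "minikube"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here; the real step uses the CA cert/key as the parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	pemBytes, err := serverCertPEM()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pemBytes[:60])
}
```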
	I0917 01:20:54.179187  841202 ubuntu.go:206] setting minikube options for container-runtime
	I0917 01:20:54.179345  841202 config.go:182] Loaded profile config "kindnet-333616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:54.179463  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.198684  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:54.198909  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:54.198925  841202 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 01:20:54.444483  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 01:20:54.444511  841202 machine.go:96] duration metric: took 1.613270939s to provisionDockerMachine
	I0917 01:20:54.444522  841202 client.go:171] duration metric: took 7.249193748s to LocalClient.Create
	I0917 01:20:54.444542  841202 start.go:167] duration metric: took 7.249257601s to libmachine.API.Create "kindnet-333616"
	I0917 01:20:54.444554  841202 start.go:293] postStartSetup for "kindnet-333616" (driver="docker")
	I0917 01:20:54.444572  841202 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 01:20:54.444641  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 01:20:54.444690  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.463166  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.563892  841202 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 01:20:54.567735  841202 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 01:20:54.567765  841202 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 01:20:54.567772  841202 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 01:20:54.567782  841202 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 01:20:54.567795  841202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 01:20:54.567855  841202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 01:20:54.567966  841202 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 01:20:54.568108  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 01:20:54.577885  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 01:20:54.606690  841202 start.go:296] duration metric: took 162.114963ms for postStartSetup
	I0917 01:20:54.607107  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:54.625322  841202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json ...
	I0917 01:20:54.625758  841202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 01:20:54.625821  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.643332  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.737805  841202 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 01:20:54.742465  841202 start.go:128] duration metric: took 7.549168533s to createHost
	I0917 01:20:54.742494  841202 start.go:83] releasing machines lock for "kindnet-333616", held for 7.549346209s
	I0917 01:20:54.742570  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:54.759991  841202 ssh_runner.go:195] Run: cat /version.json
	I0917 01:20:54.760051  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.760083  841202 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 01:20:54.760154  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.778915  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.779306  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.952563  841202 ssh_runner.go:195] Run: systemctl --version
	I0917 01:20:54.957470  841202 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 01:20:55.101309  841202 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 01:20:55.106493  841202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:20:55.131742  841202 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 01:20:55.131831  841202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:20:55.164272  841202 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 01:20:55.164303  841202 start.go:495] detecting cgroup driver to use...
	I0917 01:20:55.164352  841202 detect.go:190] detected "systemd" cgroup driver on host os
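detect.go reports the host's cgroup driver so CRI-O and the kubelet can be configured to match ("systemd" here, which is why the sed edits below set cgroup_manager = "systemd"). A rough stand-in for that probe, using the conventional sd_booted(3) check for a systemd-managed host; the real detection is more involved:

```go
package main

import (
	"fmt"
	"os"
)

// detectCgroupDriver approximates the "detected systemd cgroup driver"
// line: /run/systemd/system exists exactly when the host was booted
// with systemd (the same check sd_booted(3) performs), in which case
// the systemd cgroup driver is the sane default.
func detectCgroupDriver() string {
	if fi, err := os.Stat("/run/systemd/system"); err == nil && fi.IsDir() {
		return "systemd"
	}
	return "cgroupfs"
}

func main() {
	fmt.Println(detectCgroupDriver())
}
```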
	I0917 01:20:55.164430  841202 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 01:20:55.182732  841202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 01:20:55.194856  841202 docker.go:218] disabling cri-docker service (if available) ...
	I0917 01:20:55.194918  841202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 01:20:55.209368  841202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 01:20:55.224908  841202 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 01:20:55.294219  841202 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 01:20:55.366744  841202 docker.go:234] disabling docker service ...
	I0917 01:20:55.366805  841202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 01:20:55.386004  841202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 01:20:55.398281  841202 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 01:20:55.471097  841202 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 01:20:55.620605  841202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 01:20:55.632936  841202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 01:20:55.650751  841202 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 01:20:55.650813  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.665355  841202 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 01:20:55.665449  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.677774  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.688724  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.700141  841202 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 01:20:55.711135  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.722974  841202 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.741236  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.752869  841202 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 01:20:55.762991  841202 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 01:20:55.772774  841202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:20:55.842833  841202 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 01:20:55.939370  841202 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 01:20:55.939456  841202 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 01:20:55.943491  841202 start.go:563] Will wait 60s for crictl version
	I0917 01:20:55.943562  841202 ssh_runner.go:195] Run: which crictl
	I0917 01:20:55.947384  841202 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:20:55.984137  841202 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
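The "Will wait 60s for socket path" step above polls until /var/run/crio/crio.sock appears before probing crictl. A sketch of the same idea that dials the socket rather than stat-ing it (an assumption; the log's step only checks for the file):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket dials the CRI unix socket until it accepts connections
// or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s not ready within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
```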
	I0917 01:20:55.984206  841202 ssh_runner.go:195] Run: crio --version
	I0917 01:20:56.022652  841202 ssh_runner.go:195] Run: crio --version
	I0917 01:20:56.062561  841202 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 01:20:56.063985  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 01:20:56.081880  841202 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0917 01:20:56.086073  841202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 01:20:56.098482  841202 kubeadm.go:875] updating cluster {Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 01:20:56.098622  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:56.098685  841202 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:20:56.169870  841202 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 01:20:56.169898  841202 crio.go:433] Images already preloaded, skipping extraction
	I0917 01:20:56.169953  841202 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:20:56.206753  841202 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 01:20:56.206784  841202 cache_images.go:85] Images are preloaded, skipping loading
	I0917 01:20:56.206794  841202 kubeadm.go:926] updating node { 192.168.103.2 8443 v1.34.0 crio true true} ...
	I0917 01:20:56.206913  841202 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-333616 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0917 01:20:56.207001  841202 ssh_runner.go:195] Run: crio config
	I0917 01:20:56.253538  841202 cni.go:84] Creating CNI manager for "kindnet"
	I0917 01:20:56.253567  841202 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 01:20:56.253590  841202 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-333616 NodeName:kindnet-333616 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 01:20:56.253716  841202 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-333616"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
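	The generated kubeadm.yaml above stacks four API objects in one multi-document stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch using gopkg.in/yaml.v3 to split such a stream and list each document's kind, a cheap sanity check before handing the file to kubeadm:

```go
package main

import (
	"fmt"
	"strings"

	"gopkg.in/yaml.v3"
)

// A trimmed stand-in for the generated file: four documents in one stream.
const kubeadmYAML = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`

func main() {
	dec := yaml.NewDecoder(strings.NewReader(kubeadmYAML))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		// Decode returns io.EOF once the stream is exhausted.
		if err := dec.Decode(&doc); err != nil {
			break
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
```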
	
	I0917 01:20:56.253775  841202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 01:20:56.264146  841202 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 01:20:56.264224  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 01:20:56.274749  841202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0917 01:20:56.293906  841202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 01:20:56.316487  841202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I0917 01:20:56.336550  841202 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0917 01:20:56.340325  841202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 01:20:56.352936  841202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:20:56.418882  841202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 01:20:56.445037  841202 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616 for IP: 192.168.103.2
	I0917 01:20:56.445069  841202 certs.go:194] generating shared ca certs ...
	I0917 01:20:56.445096  841202 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.445265  841202 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 01:20:56.445328  841202 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 01:20:56.445342  841202 certs.go:256] generating profile certs ...
	I0917 01:20:56.445433  841202 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key
	I0917 01:20:56.445452  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt with IP's: []
	I0917 01:20:56.575658  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt ...
	I0917 01:20:56.575692  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt: {Name:mke4c01e2ad680ec95da34129972695bc352dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.575918  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key ...
	I0917 01:20:56.575935  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key: {Name:mk196e199bf8e509067e257fa5978cc4017a9515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.576063  841202 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883
	I0917 01:20:56.576083  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0917 01:20:56.891743  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 ...
	I0917 01:20:56.891776  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883: {Name:mk080638a3e062c43555f3e1bbede660cca9c8ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.891955  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883 ...
	I0917 01:20:56.891969  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883: {Name:mkbe71ad29db0d31be773639ab90fdd03d84b089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.892043  841202 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt
	I0917 01:20:56.892145  841202 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key
	I0917 01:20:56.892212  841202 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key
	I0917 01:20:56.892228  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt with IP's: []
	W0917 01:20:55.172587  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:57.173997  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:59.673374  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	I0917 01:20:57.205489  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt ...
	I0917 01:20:57.205524  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt: {Name:mkf6b5ecd44d0faf20e6e53acc7eeebe333eca17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:57.205728  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key ...
	I0917 01:20:57.205746  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key: {Name:mk2b3f753e527ada6b46c8fd672f3b210e243668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:57.205983  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 01:20:57.206033  841202 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 01:20:57.206049  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 01:20:57.206079  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 01:20:57.206110  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 01:20:57.206143  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 01:20:57.206196  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 01:20:57.206849  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 01:20:57.236316  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 01:20:57.264039  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 01:20:57.290903  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 01:20:57.316649  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 01:20:57.343336  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 01:20:57.369426  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 01:20:57.395757  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 01:20:57.422129  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 01:20:57.452169  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 01:20:57.479060  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 01:20:57.505045  841202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 01:20:57.524210  841202 ssh_runner.go:195] Run: openssl version
	I0917 01:20:57.530236  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 01:20:57.540421  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.544062  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.544118  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.551188  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 01:20:57.561515  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 01:20:57.572283  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.576261  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.576323  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.583692  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 01:20:57.593924  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 01:20:57.604001  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.608154  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.608211  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.615475  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
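The hash-and-symlink sequence above exists because OpenSSL resolves CAs in /etc/ssl/certs by <subject_hash>.0 filenames. A sketch of the same dance; it shells out to openssl exactly as the log does, and linkBySubjectHash is a hypothetical name:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash computes a certificate's subject hash with
// `openssl x509 -hash -noout` and drops a <hash>.0 symlink into the
// certs directory, mirroring the `ln -fs` steps in the log.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
	// ln -fs equivalent: remove a stale link, then create a fresh one.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
```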
	I0917 01:20:57.625656  841202 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 01:20:57.629541  841202 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 01:20:57.629606  841202 kubeadm.go:392] StartCluster: {Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:20:57.629685  841202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 01:20:57.629748  841202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 01:20:57.668306  841202 cri.go:89] found id: ""
	I0917 01:20:57.668384  841202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 01:20:57.679315  841202 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 01:20:57.689592  841202 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 01:20:57.689666  841202 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 01:20:57.699255  841202 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 01:20:57.699272  841202 kubeadm.go:157] found existing configuration files:
	
	I0917 01:20:57.699327  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 01:20:57.708879  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 01:20:57.708950  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 01:20:57.718406  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 01:20:57.728172  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 01:20:57.728251  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 01:20:57.737991  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 01:20:57.748427  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 01:20:57.748487  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 01:20:57.757822  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 01:20:57.767640  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 01:20:57.767708  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 01:20:57.776934  841202 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 01:20:57.849477  841202 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0917 01:20:57.909176  841202 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 01:21:01.172820  832418 pod_ready.go:94] pod "coredns-66bc5c9577-qqxrk" is "Ready"
	I0917 01:21:01.172851  832418 pod_ready.go:86] duration metric: took 38.505527826s for pod "coredns-66bc5c9577-qqxrk" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.175617  832418 pod_ready.go:83] waiting for pod "etcd-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.179752  832418 pod_ready.go:94] pod "etcd-embed-certs-748988" is "Ready"
	I0917 01:21:01.179779  832418 pod_ready.go:86] duration metric: took 4.135657ms for pod "etcd-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.182426  832418 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.186899  832418 pod_ready.go:94] pod "kube-apiserver-embed-certs-748988" is "Ready"
	I0917 01:21:01.186928  832418 pod_ready.go:86] duration metric: took 4.474792ms for pod "kube-apiserver-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.189100  832418 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.371319  832418 pod_ready.go:94] pod "kube-controller-manager-embed-certs-748988" is "Ready"
	I0917 01:21:01.371352  832418 pod_ready.go:86] duration metric: took 182.22498ms for pod "kube-controller-manager-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.570958  832418 pod_ready.go:83] waiting for pod "kube-proxy-2bkdq" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.970376  832418 pod_ready.go:94] pod "kube-proxy-2bkdq" is "Ready"
	I0917 01:21:01.970432  832418 pod_ready.go:86] duration metric: took 399.444446ms for pod "kube-proxy-2bkdq" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.171077  832418 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.570435  832418 pod_ready.go:94] pod "kube-scheduler-embed-certs-748988" is "Ready"
	I0917 01:21:02.570467  832418 pod_ready.go:86] duration metric: took 399.360883ms for pod "kube-scheduler-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.570484  832418 pod_ready.go:40] duration metric: took 39.908444834s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 01:21:02.617522  832418 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 01:21:02.619899  832418 out.go:179] * Done! kubectl is now configured to use "embed-certs-748988" cluster and "default" namespace by default
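The pod_ready.go loop that just finished polls each kube-system pod until its PodReady condition is True (or the pod is gone). A client-go sketch of that check against the same coredns pod; the 2.5s interval is an assumption read off the timestamps above:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the pod_ready.go checks: a pod counts as "Ready"
// when its PodReady condition has status True.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until the pod reports Ready, as the W/I lines above do.
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-66bc5c9577-qqxrk", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Printf("pod %q is Ready\n", pod.Name)
			return
		}
		time.Sleep(2500 * time.Millisecond)
	}
}
```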
	I0917 01:21:05.225533  834635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:21:05.270428  834635 out.go:203] 
	W0917 01:21:05.271803  834635 out.go:285] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:05Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	
	W0917 01:21:05.271827  834635 out.go:285] * 
	W0917 01:21:05.273977  834635 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 01:21:05.275509  834635 out.go:203] 
	I0917 01:21:02.051660  819928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:21:02.085236  819928 retry.go:31] will retry after 15.073168141s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:02Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	I0917 01:21:08.749313  841202 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0917 01:21:08.749411  841202 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 01:21:08.749519  841202 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0917 01:21:08.749589  841202 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0917 01:21:08.749650  841202 kubeadm.go:310] OS: Linux
	I0917 01:21:08.749713  841202 kubeadm.go:310] CGROUPS_CPU: enabled
	I0917 01:21:08.749779  841202 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0917 01:21:08.749841  841202 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0917 01:21:08.749902  841202 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0917 01:21:08.749959  841202 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0917 01:21:08.750017  841202 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0917 01:21:08.750085  841202 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0917 01:21:08.750143  841202 kubeadm.go:310] CGROUPS_IO: enabled
	I0917 01:21:08.750240  841202 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 01:21:08.750408  841202 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 01:21:08.750528  841202 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 01:21:08.750612  841202 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 01:21:08.752776  841202 out.go:252]   - Generating certificates and keys ...
	I0917 01:21:08.752899  841202 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 01:21:08.752994  841202 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 01:21:08.753166  841202 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 01:21:08.753271  841202 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 01:21:08.753363  841202 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 01:21:08.753458  841202 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 01:21:08.753543  841202 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 01:21:08.753685  841202 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-333616 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0917 01:21:08.753763  841202 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 01:21:08.753955  841202 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-333616 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0917 01:21:08.754090  841202 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 01:21:08.754192  841202 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 01:21:08.754257  841202 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 01:21:08.754342  841202 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 01:21:08.754430  841202 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 01:21:08.754478  841202 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 01:21:08.754527  841202 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 01:21:08.754580  841202 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 01:21:08.754625  841202 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 01:21:08.754700  841202 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 01:21:08.754755  841202 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 01:21:08.756322  841202 out.go:252]   - Booting up control plane ...
	I0917 01:21:08.756479  841202 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 01:21:08.756610  841202 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 01:21:08.756707  841202 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 01:21:08.756865  841202 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 01:21:08.756981  841202 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0917 01:21:08.757139  841202 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0917 01:21:08.757242  841202 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 01:21:08.757292  841202 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 01:21:08.757475  841202 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 01:21:08.757598  841202 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 01:21:08.757667  841202 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.884368ms
	I0917 01:21:08.757780  841202 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0917 01:21:08.757913  841202 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I0917 01:21:08.758047  841202 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0917 01:21:08.758174  841202 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0917 01:21:08.758291  841202 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.005156484s
	I0917 01:21:08.758398  841202 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.505889566s
	I0917 01:21:08.758508  841202 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.501611145s
	I0917 01:21:08.758646  841202 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 01:21:08.758798  841202 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 01:21:08.758886  841202 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 01:21:08.759100  841202 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-333616 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 01:21:08.759198  841202 kubeadm.go:310] [bootstrap-token] Using token: 162lgr.l6wrgxxcju3qv1m6
	I0917 01:21:08.760426  841202 out.go:252]   - Configuring RBAC rules ...
	I0917 01:21:08.760541  841202 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 01:21:08.760645  841202 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 01:21:08.760852  841202 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 01:21:08.761023  841202 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 01:21:08.761194  841202 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 01:21:08.761327  841202 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 01:21:08.761559  841202 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 01:21:08.761636  841202 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 01:21:08.761697  841202 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 01:21:08.761708  841202 kubeadm.go:310] 
	I0917 01:21:08.761785  841202 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 01:21:08.761796  841202 kubeadm.go:310] 
	I0917 01:21:08.761916  841202 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 01:21:08.761932  841202 kubeadm.go:310] 
	I0917 01:21:08.761974  841202 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 01:21:08.762071  841202 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 01:21:08.762135  841202 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 01:21:08.762145  841202 kubeadm.go:310] 
	I0917 01:21:08.762215  841202 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 01:21:08.762222  841202 kubeadm.go:310] 
	I0917 01:21:08.762262  841202 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 01:21:08.762269  841202 kubeadm.go:310] 
	I0917 01:21:08.762319  841202 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 01:21:08.762431  841202 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 01:21:08.762533  841202 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 01:21:08.762551  841202 kubeadm.go:310] 
	I0917 01:21:08.762669  841202 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 01:21:08.762785  841202 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 01:21:08.762797  841202 kubeadm.go:310] 
	I0917 01:21:08.762899  841202 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 162lgr.l6wrgxxcju3qv1m6 \
	I0917 01:21:08.763036  841202 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 \
	I0917 01:21:08.763072  841202 kubeadm.go:310] 	--control-plane 
	I0917 01:21:08.763080  841202 kubeadm.go:310] 
	I0917 01:21:08.763190  841202 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 01:21:08.763210  841202 kubeadm.go:310] 
	I0917 01:21:08.763278  841202 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 162lgr.l6wrgxxcju3qv1m6 \
	I0917 01:21:08.763415  841202 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 
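The --discovery-token-ca-cert-hash that kubeadm prints pins the cluster CA for joining nodes: by kubeadm's documented scheme it is the SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info, so a node joining over an untrusted network can verify it is talking to the right cluster. A sketch that recomputes the value from the certificate directory named in the [certs] section above (the ca.crt filename is assumed, not shown in this log):

    // cahash.go: recompute kubeadm's discovery-token-ca-cert-hash.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		log.Fatal("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// kubeadm hashes the DER-encoded Subject Public Key Info, not the whole cert.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }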
	I0917 01:21:08.763437  841202 cni.go:84] Creating CNI manager for "kindnet"
	I0917 01:21:08.766700  841202 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0917 01:21:08.767858  841202 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0917 01:21:08.773343  841202 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0917 01:21:08.773364  841202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0917 01:21:08.793795  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0917 01:21:09.025565  841202 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 01:21:09.025804  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:09.025927  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-333616 minikube.k8s.io/updated_at=2025_09_17T01_21_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=kindnet-333616 minikube.k8s.io/primary=true
	I0917 01:21:09.125386  841202 ops.go:34] apiserver oom_adj: -16
	I0917 01:21:09.125519  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:09.626138  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:10.126613  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:10.626037  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:11.126442  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:11.626219  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:12.125827  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:12.626205  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:13.126607  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:13.209490  841202 kubeadm.go:1105] duration metric: took 4.183732835s to wait for elevateKubeSystemPrivileges
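The burst of `kubectl get sa default` runs above is minikube polling, at roughly 500ms intervals, for the default ServiceAccount, which the controller-manager creates asynchronously after the cluster comes up; the minikube-rbac ClusterRoleBinding created just before it targets kube-system:default, so the step cannot finish until that account exists. A simplified sketch of the same wait loop (a hypothetical helper, not minikube's actual code):

    // waitsa.go: poll until the default ServiceAccount exists or a deadline passes.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig") // kubeconfig path from the log above
    		if cmd.Run() == nil {
    			fmt.Println("default ServiceAccount exists")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	}
    	fmt.Println("timed out waiting for the default ServiceAccount")
    }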
	I0917 01:21:13.209537  841202 kubeadm.go:394] duration metric: took 15.579926785s to StartCluster
	I0917 01:21:13.209560  841202 settings.go:142] acquiring lock: {Name:mk3b4e5824fb8718eece00dc70a9d05f0af2a028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:21:13.209647  841202 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 01:21:13.211405  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:21:13.211740  841202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 01:21:13.211739  841202 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 01:21:13.211827  841202 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 01:21:13.211925  841202 addons.go:69] Setting storage-provisioner=true in profile "kindnet-333616"
	I0917 01:21:13.211938  841202 config.go:182] Loaded profile config "kindnet-333616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:21:13.211959  841202 addons.go:238] Setting addon storage-provisioner=true in "kindnet-333616"
	I0917 01:21:13.211967  841202 addons.go:69] Setting default-storageclass=true in profile "kindnet-333616"
	I0917 01:21:13.211992  841202 host.go:66] Checking if "kindnet-333616" exists ...
	I0917 01:21:13.212000  841202 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-333616"
	I0917 01:21:13.212458  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:21:13.212600  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:21:13.217114  841202 out.go:179] * Verifying Kubernetes components...
	I0917 01:21:13.219705  841202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:21:13.240699  841202 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Sep 17 01:20:23 default-k8s-diff-port-377743 systemd[1]: crio.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 01:20:23 default-k8s-diff-port-377743 systemd[1]: crio.service: Failed with result 'exit-code'.
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: Starting Container Runtime Interface for OCI (CRI-O)...
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900219900Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900375290Z" level=info msg="Node configuration value for hugetlb cgroup is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900412189Z" level=info msg="Node configuration value for pid cgroup is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900479004Z" level=info msg="Node configuration value for memoryswap cgroup is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900490617Z" level=info msg="Node configuration value for cgroup v2 is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.906797224Z" level=info msg="Node configuration value for systemd CollectMode is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.913464400Z" level=info msg="Node configuration value for systemd AllowedCPUs is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.913750835Z" level=info msg="[graphdriver] using prior storage driver: overlay"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.914768261Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.917639624Z" level=info msg="Conmon does support the --sync option"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.917673061Z" level=info msg="Conmon does support the --log-global-size-max option"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919571823Z" level=info msg="Using seccomp default profile when unspecified: true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919593931Z" level=info msg="No seccomp profile specified, using the internal default"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919603651Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919611839Z" level=info msg="No blockio config file specified, blockio not configured"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919618928Z" level=info msg="RDT not available in the host system"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.924637958Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.924675992Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.937133863Z" level=fatal msg="too many open files"
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: crio.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: crio.service: Failed with result 'exit-code'.
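The root cause for this group of failures is the `level=fatal msg="too many open files"` line: CRI-O starts cleanly, registers its CNI network, and then aborts when it exhausts a file-handle resource, which on a CI host running several clusters in parallel is typically either the process RLIMIT_NOFILE or the kernel's inotify instance/watch limits. The journal alone does not say which, so a small diagnostic sketch for checking both on the affected host (Linux-only, not part of the suite):

    // fdlimits.go: print the limits that commonly underlie "too many open files".
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    	"syscall"
    )

    func main() {
    	var rl syscall.Rlimit
    	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err == nil {
    		fmt.Printf("RLIMIT_NOFILE: soft=%d hard=%d\n", rl.Cur, rl.Max)
    	}
    	// inotify exhaustion also surfaces as EMFILE, i.e. "too many open files".
    	for _, p := range []string{
    		"/proc/sys/fs/inotify/max_user_instances",
    		"/proc/sys/fs/inotify/max_user_watches",
    	} {
    		if b, err := os.ReadFile(p); err == nil {
    			fmt.Printf("%s = %s\n", p, strings.TrimSpace(string(b)))
    		}
    	}
    }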
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:14Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8444 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	
	
	==> kernel <==
	 01:21:14 up  4:03,  0 users,  load average: 2.69, 3.20, 2.37
	Linux default-k8s-diff-port-377743 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kubelet <==
	-- No entries --
	

-- /stdout --
** stderr ** 
	E0917 01:21:13.904524  846790 logs.go:279] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:13Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:13.938798  846790 logs.go:279] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:13Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:13.973910  846790 logs.go:279] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:13Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:14.013762  846790 logs.go:279] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:14Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:14.053885  846790 logs.go:279] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:14Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:14.095164  846790 logs.go:279] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:14Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:14.130018  846790 logs.go:279] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:14Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:14.164422  846790 logs.go:279] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:14Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:14.202835  846790 logs.go:279] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:14Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""

** /stderr **
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743: exit status 6 (310.030973ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0917 01:21:14.756688  847395 status.go:458] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-377743" does not appear in /home/jenkins/minikube-integration/21550-517646/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "default-k8s-diff-port-377743" apiserver is not running, skipping kubectl commands (state="Stopped")
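The status.go:458 error above is minikube cross-checking the kubeconfig: `minikube status` resolves the profile's API endpoint from the kubeconfig file and reports it as missing when no cluster entry with the profile's name exists, which is why the helper skips kubectl commands here. An equivalent lookup, sketched with client-go's clientcmd package (an assumption about the mechanism; minikube's internal check may differ in detail):

    // kubeconfigcheck.go: look up a named cluster entry in a kubeconfig file.
    package main

    import (
    	"fmt"
    	"log"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	path := "/home/jenkins/minikube-integration/21550-517646/kubeconfig" // path from the error above
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	const name = "default-k8s-diff-port-377743"
    	cluster, ok := cfg.Clusters[name]
    	if !ok {
    		// The condition the status check reported above.
    		fmt.Printf("%q does not appear in %s\n", name, path)
    		return
    	}
    	fmt.Printf("endpoint for %q: %s\n", name, cluster.Server)
    }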
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-377743
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-377743:

-- stdout --
	[
	    {
	        "Id": "ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc",
	        "Created": "2025-09-17T01:18:46.928961651Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 834842,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T01:20:19.100007065Z",
	            "FinishedAt": "2025-09-17T01:20:18.156352163Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/hosts",
	        "LogPath": "/var/lib/docker/containers/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc/ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc-json.log",
	        "Name": "/default-k8s-diff-port-377743",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-377743:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-377743",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce5cf21d301e88847694e1b22462b90d849471eb2e3c57c80142b9dc7f1b96cc",
	                "LowerDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d-init/diff:/var/lib/docker/overlay2/da2e50720f29bde88d2c0462824f4e1f797ec6bbebf5fbd828a6122c584a848a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/abf84dcf6a36e5c580ce5ed5382c6d2bf4ac87efe09b95f3c2b7cd0df38db94d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-377743",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-377743/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-377743",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-377743",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-377743",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "501c8109c3bc8c00897c7b54c7d2675ba4a3bb996e4f4f197def146bb8ff190a",
	            "SandboxKey": "/var/run/docker/netns/501c8109c3bc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-377743": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:0c:94:b6:e5:07",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2391a23950fb5471e73c0e959464ddf40474359ad3a94730b27d02f587b2a08a",
	                    "EndpointID": "db62485899e89b194074d4c36c3a42a7db3e7cbeba5ca889e1cc809ec8289fa5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-377743",
	                        "ce5cf21d301e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743: exit status 6 (310.383174ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0917 01:21:15.096187  847552 status.go:458] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-377743" does not appear in /home/jenkins/minikube-integration/21550-517646/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
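The helpers extract single fields with Go template flags (--format={{.APIServer}}, --format={{.Host}}), which is how the bare "Stopped" and "Running" lines above are produced from a larger status struct. A minimal illustration of the mechanism with the standard library (the Status struct here is illustrative, not minikube's actual type):

    // statustemplate.go: render one field of a struct the way --format does.
    package main

    import (
    	"os"
    	"text/template"
    )

    // Status mirrors the shape implied by --format={{.Host}} / {{.APIServer}}.
    type Status struct {
    	Host      string
    	APIServer string
    }

    func main() {
    	st := Status{Host: "Running", APIServer: "Stopped"} // values from the runs above
    	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
    	_ = tmpl.Execute(os.Stdout, st)
    }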
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-377743 logs -n 25
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-333616 sudo cat /var/lib/kubelet/config.yaml                                                                                     │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl status docker --all --full --no-pager                                                                      │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl cat docker --no-pager                                                                                      │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/docker/daemon.json                                                                                          │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo docker system info                                                                                                   │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl status cri-docker --all --full --no-pager                                                                  │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl cat cri-docker --no-pager                                                                                  │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                             │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                       │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cri-dockerd --version                                                                                                │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl status containerd --all --full --no-pager                                                                  │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ ssh     │ -p auto-333616 sudo systemctl cat containerd --no-pager                                                                                  │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /lib/systemd/system/containerd.service                                                                           │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo cat /etc/containerd/config.toml                                                                                      │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo containerd config dump                                                                                               │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl status crio --all --full --no-pager                                                                        │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo systemctl cat crio --no-pager                                                                                        │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                              │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ ssh     │ -p auto-333616 sudo crio config                                                                                                          │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ delete  │ -p auto-333616                                                                                                                           │ auto-333616                  │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │ 17 Sep 25 01:20 UTC │
	│ start   │ -p kindnet-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio │ kindnet-333616               │ jenkins │ v1.37.0 │ 17 Sep 25 01:20 UTC │                     │
	│ image   │ default-k8s-diff-port-377743 image list --format=json                                                                                    │ default-k8s-diff-port-377743 │ jenkins │ v1.37.0 │ 17 Sep 25 01:21 UTC │ 17 Sep 25 01:21 UTC │
	│ pause   │ -p default-k8s-diff-port-377743 --alsologtostderr -v=1                                                                                   │ default-k8s-diff-port-377743 │ jenkins │ v1.37.0 │ 17 Sep 25 01:21 UTC │                     │
	│ image   │ embed-certs-748988 image list --format=json                                                                                              │ embed-certs-748988           │ jenkins │ v1.37.0 │ 17 Sep 25 01:21 UTC │ 17 Sep 25 01:21 UTC │
	│ pause   │ -p embed-certs-748988 --alsologtostderr -v=1                                                                                             │ embed-certs-748988           │ jenkins │ v1.37.0 │ 17 Sep 25 01:21 UTC │ 17 Sep 25 01:21 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 01:20:46
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 01:20:46.991253  841202 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:20:46.991355  841202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:20:46.991363  841202 out.go:374] Setting ErrFile to fd 2...
	I0917 01:20:46.991367  841202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:20:46.991948  841202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 01:20:46.993103  841202 out.go:368] Setting JSON to false
	I0917 01:20:46.994427  841202 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14590,"bootTime":1758057457,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 01:20:46.994531  841202 start.go:140] virtualization: kvm guest
	I0917 01:20:46.996762  841202 out.go:179] * [kindnet-333616] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 01:20:46.998033  841202 notify.go:220] Checking for updates...
	I0917 01:20:46.998040  841202 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 01:20:46.999333  841202 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:20:47.000646  841202 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 01:20:47.002223  841202 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 01:20:47.003668  841202 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 01:20:47.005002  841202 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:20:47.006954  841202 config.go:182] Loaded profile config "default-k8s-diff-port-377743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007104  841202 config.go:182] Loaded profile config "embed-certs-748988": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007208  841202 config.go:182] Loaded profile config "kubernetes-upgrade-790254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:47.007331  841202 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 01:20:47.034761  841202 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 01:20:47.034876  841202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:20:47.096866  841202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 01:20:47.086442486 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 01:20:47.097016  841202 docker.go:318] overlay module found
	I0917 01:20:47.099127  841202 out.go:179] * Using the docker driver based on user configuration
	I0917 01:20:47.100598  841202 start.go:304] selected driver: docker
	I0917 01:20:47.100620  841202 start.go:918] validating driver "docker" against <nil>
	I0917 01:20:47.100634  841202 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:20:47.101213  841202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:20:47.157653  841202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-17 01:20:47.147017932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
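The two docker info dumps above are how minikube probes the host daemon: it shells out to docker system info --format "{{json .}}" and decodes the JSON blob into a Go struct. A minimal sketch of that pattern follows; the dockerInfo struct here is a hypothetical subset of the fields visible in the log, not minikube's actual type.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo is a hypothetical subset of the fields seen in the log above.
    type dockerInfo struct {
        NCPU          int    `json:"NCPU"`
        MemTotal      int64  `json:"MemTotal"`
        ServerVersion string `json:"ServerVersion"`
        CgroupDriver  string `json:"CgroupDriver"`
        OSType        string `json:"OSType"`
    }

    func main() {
        // Ask the daemon to emit its info as one JSON object, then decode it.
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("docker %s on %s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
            info.ServerVersion, info.OSType, info.NCPU, info.MemTotal, info.CgroupDriver)
    }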
	I0917 01:20:47.157843  841202 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0917 01:20:47.158047  841202 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 01:20:47.159808  841202 out.go:179] * Using Docker driver with root privileges
	I0917 01:20:47.161165  841202 cni.go:84] Creating CNI manager for "kindnet"
	I0917 01:20:47.161185  841202 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 01:20:47.161271  841202 start.go:348] cluster config:
	{Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Netwo
rkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInte
rval:1m0s}
	I0917 01:20:47.162725  841202 out.go:179] * Starting "kindnet-333616" primary control-plane node in "kindnet-333616" cluster
	I0917 01:20:47.164093  841202 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 01:20:47.165424  841202 out.go:179] * Pulling base image v0.0.48 ...
	I0917 01:20:47.166669  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:47.166713  841202 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 01:20:47.166725  841202 cache.go:58] Caching tarball of preloaded images
	I0917 01:20:47.166780  841202 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 01:20:47.166823  841202 preload.go:172] Found /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 01:20:47.166834  841202 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 01:20:47.166922  841202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json ...
	I0917 01:20:47.166937  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json: {Name:mkd38d1752014f4bab9dae52a7872fb8a5cc71fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:47.192914  841202 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 01:20:47.192938  841202 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 01:20:47.192970  841202 cache.go:232] Successfully downloaded all kic artifacts
	I0917 01:20:47.193004  841202 start.go:360] acquireMachinesLock for kindnet-333616: {Name:mkc24d8ed730ab1614498d5beb0270c845773667 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:20:47.193133  841202 start.go:364] duration metric: took 104.991µs to acquireMachinesLock for "kindnet-333616"
	I0917 01:20:47.193181  841202 start.go:93] Provisioning new machine with config: &{Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 01:20:47.193276  841202 start.go:125] createHost starting for "" (driver="docker")
	W0917 01:20:45.672555  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:47.672815  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	I0917 01:20:47.195051  841202 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0917 01:20:47.195285  841202 start.go:159] libmachine.API.Create for "kindnet-333616" (driver="docker")
	I0917 01:20:47.195320  841202 client.go:168] LocalClient.Create starting
	I0917 01:20:47.195405  841202 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem
	I0917 01:20:47.195446  841202 main.go:141] libmachine: Decoding PEM data...
	I0917 01:20:47.195462  841202 main.go:141] libmachine: Parsing certificate...
	I0917 01:20:47.195517  841202 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem
	I0917 01:20:47.195536  841202 main.go:141] libmachine: Decoding PEM data...
	I0917 01:20:47.195549  841202 main.go:141] libmachine: Parsing certificate...
	I0917 01:20:47.195889  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 01:20:47.213519  841202 cli_runner.go:211] docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 01:20:47.213608  841202 network_create.go:284] running [docker network inspect kindnet-333616] to gather additional debugging logs...
	I0917 01:20:47.213640  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616
	W0917 01:20:47.231055  841202 cli_runner.go:211] docker network inspect kindnet-333616 returned with exit code 1
	I0917 01:20:47.231092  841202 network_create.go:287] error running [docker network inspect kindnet-333616]: docker network inspect kindnet-333616: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-333616 not found
	I0917 01:20:47.231127  841202 network_create.go:289] output of [docker network inspect kindnet-333616]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-333616 not found
	
	** /stderr **
	I0917 01:20:47.231231  841202 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 01:20:47.249036  841202 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c0c35d0ccc41 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:29:30:69:13:a2} reservation:<nil>}
	I0917 01:20:47.249865  841202 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f7514a86599 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7e:c0:7e:cc:23:dc} reservation:<nil>}
	I0917 01:20:47.250378  841202 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0cef36e94e8e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0a:db:fd:7a:23:9f} reservation:<nil>}
	I0917 01:20:47.250966  841202 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8b9dd3e2b39a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:42:6a:d6:f0:80:2b} reservation:<nil>}
	I0917 01:20:47.251698  841202 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-2391a23950fb IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:6b:a9:b6:cd:fd} reservation:<nil>}
	I0917 01:20:47.252201  841202 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-2f0a55cba78d IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:3e:b8:6b:32:ae:3d} reservation:<nil>}
	I0917 01:20:47.253017  841202 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d90400}
	I0917 01:20:47.253041  841202 network_create.go:124] attempt to create docker network kindnet-333616 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0917 01:20:47.253107  841202 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-333616 kindnet-333616
	I0917 01:20:47.313030  841202 network_create.go:108] docker network kindnet-333616 192.168.103.0/24 created
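The subnet walk logged above (49, 58, 67, 76, 85, 94, then settling on 103) steps the third octet by 9 until it finds a /24 with no existing bridge. A rough sketch of that scan, assuming a caller-supplied set of taken octets instead of minikube's live interface probing:

    package main

    import "fmt"

    // firstFreeSubnet walks 192.168.<octet>.0/24 candidates starting at .49
    // and advancing by 9, returning the first octet not already taken.
    func firstFreeSubnet(taken map[int]bool) (string, bool) {
        for octet := 49; octet <= 247; octet += 9 {
            if !taken[octet] {
                return fmt.Sprintf("192.168.%d.0/24", octet), true
            }
        }
        return "", false
    }

    func main() {
        // The bridges the log reports as already in use.
        taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true, 94: true}
        if subnet, ok := firstFreeSubnet(taken); ok {
            fmt.Println("using free private subnet", subnet) // 192.168.103.0/24
        }
    }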
	I0917 01:20:47.313138  841202 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-333616" container
	I0917 01:20:47.313224  841202 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 01:20:47.331726  841202 cli_runner.go:164] Run: docker volume create kindnet-333616 --label name.minikube.sigs.k8s.io=kindnet-333616 --label created_by.minikube.sigs.k8s.io=true
	I0917 01:20:47.350777  841202 oci.go:103] Successfully created a docker volume kindnet-333616
	I0917 01:20:47.350848  841202 cli_runner.go:164] Run: docker run --rm --name kindnet-333616-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-333616 --entrypoint /usr/bin/test -v kindnet-333616:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 01:20:47.744926  841202 oci.go:107] Successfully prepared a docker volume kindnet-333616
	I0917 01:20:47.744972  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:47.744994  841202 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 01:20:47.745059  841202 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-333616:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 01:20:53.421561  834635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:20:53.456108  834635 retry.go:31] will retry after 11.768849883s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:20:53Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	W0917 01:20:50.174804  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:52.673786  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	I0917 01:20:52.004993  841202 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-333616:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.25985666s)
	I0917 01:20:52.005028  841202 kic.go:203] duration metric: took 4.26003048s to extract preloaded images to volume ...
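The preload extraction that just completed (4.26s) never touches the host's tar or lz4: the tarball and target volume are bind-mounted into a throwaway kicbase container whose entrypoint is /usr/bin/tar. A sketch of assembling that invocation; extractPreload is a hypothetical wrapper around the exact flags seen in the log:

    package main

    import "os/exec"

    // extractPreload unpacks an lz4-compressed image preload into a docker
    // volume by running tar inside a disposable container, so neither tar
    // nor lz4 needs to exist on the host.
    func extractPreload(tarball, volume, image string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        return cmd.Run()
    }

    func main() {
        // Tarball path and volume name taken from the log; image digest elided.
        _ = extractPreload(
            "/home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4",
            "kindnet-333616",
            "gcr.io/k8s-minikube/kicbase:v0.0.48")
    }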
	W0917 01:20:52.005133  841202 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0917 01:20:52.005164  841202 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0917 01:20:52.005202  841202 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 01:20:52.066749  841202 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-333616 --name kindnet-333616 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-333616 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-333616 --network kindnet-333616 --ip 192.168.103.2 --volume kindnet-333616:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 01:20:52.362306  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Running}}
	I0917 01:20:52.383555  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.406449  841202 cli_runner.go:164] Run: docker exec kindnet-333616 stat /var/lib/dpkg/alternatives/iptables
	I0917 01:20:52.459697  841202 oci.go:144] the created container "kindnet-333616" has a running status.
	I0917 01:20:52.459737  841202 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa...
	I0917 01:20:52.716503  841202 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 01:20:52.742117  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.761330  841202 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 01:20:52.761355  841202 kic_runner.go:114] Args: [docker exec --privileged kindnet-333616 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 01:20:52.809335  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:20:52.831209  841202 machine.go:93] provisionDockerMachine start ...
	I0917 01:20:52.831331  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:52.852889  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:52.853249  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:52.853269  841202 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 01:20:52.992938  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-333616
	
	I0917 01:20:52.992969  841202 ubuntu.go:182] provisioning hostname "kindnet-333616"
	I0917 01:20:52.993051  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.013532  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:53.013764  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:53.013778  841202 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-333616 && echo "kindnet-333616" | sudo tee /etc/hostname
	I0917 01:20:53.166881  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-333616
	
	I0917 01:20:53.166973  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.187352  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:53.187631  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:53.187658  841202 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-333616' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-333616/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-333616' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 01:20:53.332338  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
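The hosts-file snippet that just ran is idempotent: it does nothing if /etc/hosts already names the machine, rewrites an existing 127.0.1.1 line if there is one, and only appends as a last resort. A sketch of rendering that snippet per hostname; renderHostsFix is a hypothetical helper mirroring the logged script:

    package main

    import "fmt"

    // renderHostsFix returns the idempotent shell snippet from the log,
    // parameterized on the machine hostname.
    func renderHostsFix(hostname string) string {
        return "if ! grep -xq '.*\\s" + hostname + "' /etc/hosts; then\n" +
            "  if grep -xq '127.0.1.1\\s.*' /etc/hosts; then\n" +
            "    sudo sed -i 's/^127.0.1.1\\s.*/127.0.1.1 " + hostname + "/g' /etc/hosts;\n" +
            "  else\n" +
            "    echo '127.0.1.1 " + hostname + "' | sudo tee -a /etc/hosts;\n" +
            "  fi\n" +
            "fi\n"
    }

    func main() { fmt.Println(renderHostsFix("kindnet-333616")) }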
	I0917 01:20:53.332408  841202 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-517646/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-517646/.minikube}
	I0917 01:20:53.332452  841202 ubuntu.go:190] setting up certificates
	I0917 01:20:53.332472  841202 provision.go:84] configureAuth start
	I0917 01:20:53.332570  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:53.352359  841202 provision.go:143] copyHostCerts
	I0917 01:20:53.352466  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem, removing ...
	I0917 01:20:53.352481  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem
	I0917 01:20:53.352553  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/ca.pem (1082 bytes)
	I0917 01:20:53.352652  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem, removing ...
	I0917 01:20:53.352661  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem
	I0917 01:20:53.352689  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/cert.pem (1123 bytes)
	I0917 01:20:53.352759  841202 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem, removing ...
	I0917 01:20:53.352766  841202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem
	I0917 01:20:53.352789  841202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-517646/.minikube/key.pem (1675 bytes)
	I0917 01:20:53.352841  841202 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem org=jenkins.kindnet-333616 san=[127.0.0.1 192.168.103.2 kindnet-333616 localhost minikube]
	I0917 01:20:53.973038  841202 provision.go:177] copyRemoteCerts
	I0917 01:20:53.973143  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 01:20:53.973182  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:53.991696  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.091426  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0917 01:20:54.121737  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 01:20:54.150762  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 01:20:54.179160  841202 provision.go:87] duration metric: took 846.669603ms to configureAuth
	I0917 01:20:54.179187  841202 ubuntu.go:206] setting minikube options for container-runtime
	I0917 01:20:54.179345  841202 config.go:182] Loaded profile config "kindnet-333616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:20:54.179463  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.198684  841202 main.go:141] libmachine: Using SSH client type: native
	I0917 01:20:54.198909  841202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I0917 01:20:54.198925  841202 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 01:20:54.444483  841202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 01:20:54.444511  841202 machine.go:96] duration metric: took 1.613270939s to provisionDockerMachine
	I0917 01:20:54.444522  841202 client.go:171] duration metric: took 7.249193748s to LocalClient.Create
	I0917 01:20:54.444542  841202 start.go:167] duration metric: took 7.249257601s to libmachine.API.Create "kindnet-333616"
	I0917 01:20:54.444554  841202 start.go:293] postStartSetup for "kindnet-333616" (driver="docker")
	I0917 01:20:54.444572  841202 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 01:20:54.444641  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 01:20:54.444690  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.463166  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.563892  841202 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 01:20:54.567735  841202 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 01:20:54.567765  841202 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 01:20:54.567772  841202 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 01:20:54.567782  841202 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 01:20:54.567795  841202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/addons for local assets ...
	I0917 01:20:54.567855  841202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-517646/.minikube/files for local assets ...
	I0917 01:20:54.567966  841202 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem -> 5212732.pem in /etc/ssl/certs
	I0917 01:20:54.568108  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 01:20:54.577885  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 01:20:54.606690  841202 start.go:296] duration metric: took 162.114963ms for postStartSetup
	I0917 01:20:54.607107  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:54.625322  841202 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/config.json ...
	I0917 01:20:54.625758  841202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 01:20:54.625821  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.643332  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.737805  841202 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 01:20:54.742465  841202 start.go:128] duration metric: took 7.549168533s to createHost
	I0917 01:20:54.742494  841202 start.go:83] releasing machines lock for "kindnet-333616", held for 7.549346209s
	I0917 01:20:54.742570  841202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-333616
	I0917 01:20:54.759991  841202 ssh_runner.go:195] Run: cat /version.json
	I0917 01:20:54.760051  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.760083  841202 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 01:20:54.760154  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:20:54.778915  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.779306  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:20:54.952563  841202 ssh_runner.go:195] Run: systemctl --version
	I0917 01:20:54.957470  841202 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 01:20:55.101309  841202 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 01:20:55.106493  841202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:20:55.131742  841202 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 01:20:55.131831  841202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:20:55.164272  841202 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
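The two find passes above sideline competing CNI configs (loopback, then the podman/crio bridge conflists) by renaming them with a .mk_disabled suffix rather than deleting them, which keeps them restorable. A pure-Go rendition of the rename pass, as a sketch; minikube itself shells out to find exactly as logged:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableCNIConfigs renames files matching pattern in dir by appending
    // ".mk_disabled", skipping ones already renamed (the logged find uses
    // -not -name *.mk_disabled for the same purpose).
    func disableCNIConfigs(dir, pattern string) ([]string, error) {
        matches, err := filepath.Glob(filepath.Join(dir, pattern))
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, m := range matches {
            if strings.HasSuffix(m, ".mk_disabled") {
                continue // already sidelined
            }
            if err := os.Rename(m, m+".mk_disabled"); err != nil {
                return nil, err
            }
            disabled = append(disabled, m)
        }
        return disabled, nil
    }

    func main() {
        files, err := disableCNIConfigs("/etc/cni/net.d", "*bridge*")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("disabled:", files)
    }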
	I0917 01:20:55.164303  841202 start.go:495] detecting cgroup driver to use...
	I0917 01:20:55.164352  841202 detect.go:190] detected "systemd" cgroup driver on host os
	I0917 01:20:55.164430  841202 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 01:20:55.182732  841202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 01:20:55.194856  841202 docker.go:218] disabling cri-docker service (if available) ...
	I0917 01:20:55.194918  841202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 01:20:55.209368  841202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 01:20:55.224908  841202 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 01:20:55.294219  841202 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 01:20:55.366744  841202 docker.go:234] disabling docker service ...
	I0917 01:20:55.366805  841202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 01:20:55.386004  841202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 01:20:55.398281  841202 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 01:20:55.471097  841202 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 01:20:55.620605  841202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 01:20:55.632936  841202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 01:20:55.650751  841202 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 01:20:55.650813  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.665355  841202 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0917 01:20:55.665449  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.677774  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.688724  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.700141  841202 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 01:20:55.711135  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.722974  841202 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.741236  841202 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:20:55.752869  841202 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 01:20:55.762991  841202 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 01:20:55.772774  841202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:20:55.842833  841202 ssh_runner.go:195] Run: sudo systemctl restart crio
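The run of sed commands before this restart rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image, set cgroup_manager to systemd, reset conmon_cgroup, and open unprivileged ports through default_sysctls. A condensed Go sketch of the same whole-line replacement idea (the real flow runs sed over SSH exactly as shown):

    package main

    import (
        "fmt"
        "regexp"
    )

    // setConfKey replaces any existing `key = ...` line with key = "value",
    // mirroring the logged sed 's|^.*key = .*$|key = "value"|' calls.
    func setConfKey(conf []byte, key, value string) []byte {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
    }

    func main() {
        conf := []byte("pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n")
        conf = setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
        conf = setConfKey(conf, "cgroup_manager", "systemd")
        fmt.Print(string(conf))
    }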
	I0917 01:20:55.939370  841202 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 01:20:55.939456  841202 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 01:20:55.943491  841202 start.go:563] Will wait 60s for crictl version
	I0917 01:20:55.943562  841202 ssh_runner.go:195] Run: which crictl
	I0917 01:20:55.947384  841202 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:20:55.984137  841202 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 01:20:55.984206  841202 ssh_runner.go:195] Run: crio --version
	I0917 01:20:56.022652  841202 ssh_runner.go:195] Run: crio --version
	I0917 01:20:56.062561  841202 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 01:20:56.063985  841202 cli_runner.go:164] Run: docker network inspect kindnet-333616 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 01:20:56.081880  841202 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0917 01:20:56.086073  841202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
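Unlike the conditional append used for the container's own hostname earlier, the host.minikube.internal mapping is maintained by filter-and-replace: drop any stale line for the name, append the fresh one, and copy the temp file over /etc/hosts. A pure-Go sketch of that upsert, assuming tab-separated hosts entries as in the logged one-liner:

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHostsEntry returns hosts content containing exactly one line
    // mapping ip to name: stale mappings for name are dropped first, then
    // the fresh entry is appended, as in the logged grep -v / echo pipeline.
    func upsertHostsEntry(content, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(content, "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // stale entry for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        old := "127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n"
        fmt.Print(upsertHostsEntry(old, "192.168.103.1", "host.minikube.internal"))
    }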
	I0917 01:20:56.098482  841202 kubeadm.go:875] updating cluster {Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPat
h: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 01:20:56.098622  841202 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:20:56.098685  841202 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:20:56.169870  841202 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 01:20:56.169898  841202 crio.go:433] Images already preloaded, skipping extraction
	I0917 01:20:56.169953  841202 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:20:56.206753  841202 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 01:20:56.206784  841202 cache_images.go:85] Images are preloaded, skipping loading
	I0917 01:20:56.206794  841202 kubeadm.go:926] updating node { 192.168.103.2 8443 v1.34.0 crio true true} ...
	I0917 01:20:56.206913  841202 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-333616 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0917 01:20:56.207001  841202 ssh_runner.go:195] Run: crio config
	I0917 01:20:56.253538  841202 cni.go:84] Creating CNI manager for "kindnet"
	I0917 01:20:56.253567  841202 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 01:20:56.253590  841202 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-333616 NodeName:kindnet-333616 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 01:20:56.253716  841202 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-333616"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 01:20:56.253775  841202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 01:20:56.264146  841202 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 01:20:56.264224  841202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 01:20:56.274749  841202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0917 01:20:56.293906  841202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 01:20:56.316487  841202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
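The 2213-byte kubeadm.yaml.new just copied over is the three-document manifest printed above (InitConfiguration, ClusterConfiguration, and the kubelet and kube-proxy configs, joined by --- separators), rendered from the cluster config. A toy text/template sketch for just the networking block; the field names here are illustrative, not minikube's actual template data:

    package main

    import (
        "os"
        "text/template"
    )

    // netConfig carries the handful of values substituted into the sketch.
    type netConfig struct {
        DNSDomain     string
        PodSubnet     string
        ServiceSubnet string
    }

    const netTmpl = "networking:\n" +
        "  dnsDomain: {{.DNSDomain}}\n" +
        "  podSubnet: \"{{.PodSubnet}}\"\n" +
        "  serviceSubnet: {{.ServiceSubnet}}\n"

    func main() {
        t := template.Must(template.New("net").Parse(netTmpl))
        // Values from the generated config above.
        if err := t.Execute(os.Stdout, netConfig{
            DNSDomain:     "cluster.local",
            PodSubnet:     "10.244.0.0/16",
            ServiceSubnet: "10.96.0.0/12",
        }); err != nil {
            panic(err)
        }
    }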
	I0917 01:20:56.336550  841202 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0917 01:20:56.340325  841202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 01:20:56.352936  841202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:20:56.418882  841202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 01:20:56.445037  841202 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616 for IP: 192.168.103.2
	I0917 01:20:56.445069  841202 certs.go:194] generating shared ca certs ...
	I0917 01:20:56.445096  841202 certs.go:226] acquiring lock for ca certs: {Name:mkf3f2f0e48b0ec5863c5315ffee9c1298be3559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.445265  841202 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key
	I0917 01:20:56.445328  841202 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key
	I0917 01:20:56.445342  841202 certs.go:256] generating profile certs ...
	I0917 01:20:56.445433  841202 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key
	I0917 01:20:56.445452  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt with IP's: []
	I0917 01:20:56.575658  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt ...
	I0917 01:20:56.575692  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.crt: {Name:mke4c01e2ad680ec95da34129972695bc352dc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.575918  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key ...
	I0917 01:20:56.575935  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/client.key: {Name:mk196e199bf8e509067e257fa5978cc4017a9515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.576063  841202 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883
	I0917 01:20:56.576083  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0917 01:20:56.891743  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 ...
	I0917 01:20:56.891776  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883: {Name:mk080638a3e062c43555f3e1bbede660cca9c8ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.891955  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883 ...
	I0917 01:20:56.891969  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883: {Name:mkbe71ad29db0d31be773639ab90fdd03d84b089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:56.892043  841202 certs.go:381] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt.1c371883 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt
	I0917 01:20:56.892145  841202 certs.go:385] copying /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key.1c371883 -> /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key
	I0917 01:20:56.892212  841202 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key
	I0917 01:20:56.892228  841202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt with IP's: []
	W0917 01:20:55.172587  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:57.173997  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	W0917 01:20:59.673374  832418 pod_ready.go:104] pod "coredns-66bc5c9577-qqxrk" is not "Ready", error: <nil>
	I0917 01:20:57.205489  841202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt ...
	I0917 01:20:57.205524  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt: {Name:mkf6b5ecd44d0faf20e6e53acc7eeebe333eca17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:20:57.205728  841202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key ...
	I0917 01:20:57.205746  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key: {Name:mk2b3f753e527ada6b46c8fd672f3b210e243668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
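Each signed profile cert generated here carries explicit SANs; the apiserver one above, for instance, lists the IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.103.2. A sketch of minting a cert with IP SANs using crypto/x509, self-signed for brevity where minikube signs with its profile CA:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Self-signed here to keep the sketch short; minikube signs with ca.key.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The apiserver SANs from the log above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Printf("generated %d-byte DER cert with %d IP SANs\n", len(der), len(tmpl.IPAddresses))
    }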
	I0917 01:20:57.205983  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem (1338 bytes)
	W0917 01:20:57.206033  841202 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273_empty.pem, impossibly tiny 0 bytes
	I0917 01:20:57.206049  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 01:20:57.206079  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/ca.pem (1082 bytes)
	I0917 01:20:57.206110  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/cert.pem (1123 bytes)
	I0917 01:20:57.206143  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/certs/key.pem (1675 bytes)
	I0917 01:20:57.206196  841202 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem (1708 bytes)
	I0917 01:20:57.206849  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 01:20:57.236316  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 01:20:57.264039  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 01:20:57.290903  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 01:20:57.316649  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 01:20:57.343336  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 01:20:57.369426  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 01:20:57.395757  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kindnet-333616/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 01:20:57.422129  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/ssl/certs/5212732.pem --> /usr/share/ca-certificates/5212732.pem (1708 bytes)
	I0917 01:20:57.452169  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 01:20:57.479060  841202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-517646/.minikube/certs/521273.pem --> /usr/share/ca-certificates/521273.pem (1338 bytes)
	I0917 01:20:57.505045  841202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 01:20:57.524210  841202 ssh_runner.go:195] Run: openssl version
	I0917 01:20:57.530236  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 01:20:57.540421  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.544062  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.544118  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:20:57.551188  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 01:20:57.561515  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/521273.pem && ln -fs /usr/share/ca-certificates/521273.pem /etc/ssl/certs/521273.pem"
	I0917 01:20:57.572283  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.576261  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:09 /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.576323  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/521273.pem
	I0917 01:20:57.583692  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/521273.pem /etc/ssl/certs/51391683.0"
	I0917 01:20:57.593924  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5212732.pem && ln -fs /usr/share/ca-certificates/5212732.pem /etc/ssl/certs/5212732.pem"
	I0917 01:20:57.604001  841202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.608154  841202 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:09 /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.608211  841202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5212732.pem
	I0917 01:20:57.615475  841202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5212732.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 01:20:57.625656  841202 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 01:20:57.629541  841202 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 01:20:57.629606  841202 kubeadm.go:392] StartCluster: {Name:kindnet-333616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kindnet-333616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:20:57.629685  841202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 01:20:57.629748  841202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 01:20:57.668306  841202 cri.go:89] found id: ""
	I0917 01:20:57.668384  841202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 01:20:57.679315  841202 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 01:20:57.689592  841202 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 01:20:57.689666  841202 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 01:20:57.699255  841202 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 01:20:57.699272  841202 kubeadm.go:157] found existing configuration files:
	
	I0917 01:20:57.699327  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 01:20:57.708879  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 01:20:57.708950  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 01:20:57.718406  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 01:20:57.728172  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 01:20:57.728251  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 01:20:57.737991  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 01:20:57.748427  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 01:20:57.748487  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 01:20:57.757822  841202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 01:20:57.767640  841202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 01:20:57.767708  841202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 01:20:57.776934  841202 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 01:20:57.849477  841202 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0917 01:20:57.909176  841202 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 01:21:01.172820  832418 pod_ready.go:94] pod "coredns-66bc5c9577-qqxrk" is "Ready"
	I0917 01:21:01.172851  832418 pod_ready.go:86] duration metric: took 38.505527826s for pod "coredns-66bc5c9577-qqxrk" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.175617  832418 pod_ready.go:83] waiting for pod "etcd-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.179752  832418 pod_ready.go:94] pod "etcd-embed-certs-748988" is "Ready"
	I0917 01:21:01.179779  832418 pod_ready.go:86] duration metric: took 4.135657ms for pod "etcd-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.182426  832418 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.186899  832418 pod_ready.go:94] pod "kube-apiserver-embed-certs-748988" is "Ready"
	I0917 01:21:01.186928  832418 pod_ready.go:86] duration metric: took 4.474792ms for pod "kube-apiserver-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.189100  832418 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.371319  832418 pod_ready.go:94] pod "kube-controller-manager-embed-certs-748988" is "Ready"
	I0917 01:21:01.371352  832418 pod_ready.go:86] duration metric: took 182.22498ms for pod "kube-controller-manager-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.570958  832418 pod_ready.go:83] waiting for pod "kube-proxy-2bkdq" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:01.970376  832418 pod_ready.go:94] pod "kube-proxy-2bkdq" is "Ready"
	I0917 01:21:01.970432  832418 pod_ready.go:86] duration metric: took 399.444446ms for pod "kube-proxy-2bkdq" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.171077  832418 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.570435  832418 pod_ready.go:94] pod "kube-scheduler-embed-certs-748988" is "Ready"
	I0917 01:21:02.570467  832418 pod_ready.go:86] duration metric: took 399.360883ms for pod "kube-scheduler-embed-certs-748988" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:21:02.570484  832418 pod_ready.go:40] duration metric: took 39.908444834s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 01:21:02.617522  832418 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 01:21:02.619899  832418 out.go:179] * Done! kubectl is now configured to use "embed-certs-748988" cluster and "default" namespace by default
	I0917 01:21:05.225533  834635 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:21:05.270428  834635 out.go:203] 
	W0917 01:21:05.271803  834635 out.go:285] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:05Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	
	W0917 01:21:05.271827  834635 out.go:285] * 
	W0917 01:21:05.273977  834635 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 01:21:05.275509  834635 out.go:203] 
	I0917 01:21:02.051660  819928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:21:02.085236  819928 retry.go:31] will retry after 15.073168141s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:02Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	I0917 01:21:08.749313  841202 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0917 01:21:08.749411  841202 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 01:21:08.749519  841202 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0917 01:21:08.749589  841202 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0917 01:21:08.749650  841202 kubeadm.go:310] OS: Linux
	I0917 01:21:08.749713  841202 kubeadm.go:310] CGROUPS_CPU: enabled
	I0917 01:21:08.749779  841202 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0917 01:21:08.749841  841202 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0917 01:21:08.749902  841202 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0917 01:21:08.749959  841202 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0917 01:21:08.750017  841202 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0917 01:21:08.750085  841202 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0917 01:21:08.750143  841202 kubeadm.go:310] CGROUPS_IO: enabled
	I0917 01:21:08.750240  841202 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 01:21:08.750408  841202 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 01:21:08.750528  841202 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 01:21:08.750612  841202 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 01:21:08.752776  841202 out.go:252]   - Generating certificates and keys ...
	I0917 01:21:08.752899  841202 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 01:21:08.752994  841202 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 01:21:08.753166  841202 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 01:21:08.753271  841202 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 01:21:08.753363  841202 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 01:21:08.753458  841202 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 01:21:08.753543  841202 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 01:21:08.753685  841202 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-333616 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0917 01:21:08.753763  841202 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 01:21:08.753955  841202 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-333616 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0917 01:21:08.754090  841202 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 01:21:08.754192  841202 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 01:21:08.754257  841202 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 01:21:08.754342  841202 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 01:21:08.754430  841202 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 01:21:08.754478  841202 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 01:21:08.754527  841202 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 01:21:08.754580  841202 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 01:21:08.754625  841202 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 01:21:08.754700  841202 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 01:21:08.754755  841202 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 01:21:08.756322  841202 out.go:252]   - Booting up control plane ...
	I0917 01:21:08.756479  841202 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 01:21:08.756610  841202 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 01:21:08.756707  841202 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 01:21:08.756865  841202 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 01:21:08.756981  841202 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0917 01:21:08.757139  841202 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0917 01:21:08.757242  841202 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 01:21:08.757292  841202 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 01:21:08.757475  841202 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 01:21:08.757598  841202 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 01:21:08.757667  841202 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.884368ms
	I0917 01:21:08.757780  841202 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0917 01:21:08.757913  841202 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I0917 01:21:08.758047  841202 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0917 01:21:08.758174  841202 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0917 01:21:08.758291  841202 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.005156484s
	I0917 01:21:08.758398  841202 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.505889566s
	I0917 01:21:08.758508  841202 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.501611145s
	I0917 01:21:08.758646  841202 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 01:21:08.758798  841202 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 01:21:08.758886  841202 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 01:21:08.759100  841202 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-333616 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 01:21:08.759198  841202 kubeadm.go:310] [bootstrap-token] Using token: 162lgr.l6wrgxxcju3qv1m6
	I0917 01:21:08.760426  841202 out.go:252]   - Configuring RBAC rules ...
	I0917 01:21:08.760541  841202 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 01:21:08.760645  841202 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 01:21:08.760852  841202 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 01:21:08.761023  841202 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 01:21:08.761194  841202 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 01:21:08.761327  841202 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 01:21:08.761559  841202 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 01:21:08.761636  841202 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 01:21:08.761697  841202 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 01:21:08.761708  841202 kubeadm.go:310] 
	I0917 01:21:08.761785  841202 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 01:21:08.761796  841202 kubeadm.go:310] 
	I0917 01:21:08.761916  841202 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 01:21:08.761932  841202 kubeadm.go:310] 
	I0917 01:21:08.761974  841202 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 01:21:08.762071  841202 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 01:21:08.762135  841202 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 01:21:08.762145  841202 kubeadm.go:310] 
	I0917 01:21:08.762215  841202 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 01:21:08.762222  841202 kubeadm.go:310] 
	I0917 01:21:08.762262  841202 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 01:21:08.762269  841202 kubeadm.go:310] 
	I0917 01:21:08.762319  841202 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 01:21:08.762431  841202 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 01:21:08.762533  841202 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 01:21:08.762551  841202 kubeadm.go:310] 
	I0917 01:21:08.762669  841202 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 01:21:08.762785  841202 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 01:21:08.762797  841202 kubeadm.go:310] 
	I0917 01:21:08.762899  841202 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 162lgr.l6wrgxxcju3qv1m6 \
	I0917 01:21:08.763036  841202 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 \
	I0917 01:21:08.763072  841202 kubeadm.go:310] 	--control-plane 
	I0917 01:21:08.763080  841202 kubeadm.go:310] 
	I0917 01:21:08.763190  841202 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 01:21:08.763210  841202 kubeadm.go:310] 
	I0917 01:21:08.763278  841202 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 162lgr.l6wrgxxcju3qv1m6 \
	I0917 01:21:08.763415  841202 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:641c59b7ee1e7e3293d3a99db89ca94b4100a3d7db52d4afb7d1b842d462ab66 
	I0917 01:21:08.763437  841202 cni.go:84] Creating CNI manager for "kindnet"
	I0917 01:21:08.766700  841202 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0917 01:21:08.767858  841202 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0917 01:21:08.773343  841202 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0917 01:21:08.773364  841202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0917 01:21:08.793795  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0917 01:21:09.025565  841202 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 01:21:09.025804  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:09.025927  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-333616 minikube.k8s.io/updated_at=2025_09_17T01_21_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=kindnet-333616 minikube.k8s.io/primary=true
	I0917 01:21:09.125386  841202 ops.go:34] apiserver oom_adj: -16
	I0917 01:21:09.125519  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:09.626138  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:10.126613  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:10.626037  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:11.126442  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:11.626219  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:12.125827  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:12.626205  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:13.126607  841202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:21:13.209490  841202 kubeadm.go:1105] duration metric: took 4.183732835s to wait for elevateKubeSystemPrivileges
	I0917 01:21:13.209537  841202 kubeadm.go:394] duration metric: took 15.579926785s to StartCluster
	I0917 01:21:13.209560  841202 settings.go:142] acquiring lock: {Name:mk3b4e5824fb8718eece00dc70a9d05f0af2a028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:21:13.209647  841202 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 01:21:13.211405  841202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/kubeconfig: {Name:mk810ab61e25787f671ea0b59c42f89e48d9385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:21:13.211740  841202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 01:21:13.211739  841202 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 01:21:13.211827  841202 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 01:21:13.211925  841202 addons.go:69] Setting storage-provisioner=true in profile "kindnet-333616"
	I0917 01:21:13.211938  841202 config.go:182] Loaded profile config "kindnet-333616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:21:13.211959  841202 addons.go:238] Setting addon storage-provisioner=true in "kindnet-333616"
	I0917 01:21:13.211967  841202 addons.go:69] Setting default-storageclass=true in profile "kindnet-333616"
	I0917 01:21:13.211992  841202 host.go:66] Checking if "kindnet-333616" exists ...
	I0917 01:21:13.212000  841202 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-333616"
	I0917 01:21:13.212458  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:21:13.212600  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:21:13.217114  841202 out.go:179] * Verifying Kubernetes components...
	I0917 01:21:13.219705  841202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:21:13.240699  841202 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 01:21:13.241758  841202 addons.go:238] Setting addon default-storageclass=true in "kindnet-333616"
	I0917 01:21:13.242304  841202 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 01:21:13.242325  841202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 01:21:13.242400  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:21:13.243681  841202 host.go:66] Checking if "kindnet-333616" exists ...
	I0917 01:21:13.244225  841202 cli_runner.go:164] Run: docker container inspect kindnet-333616 --format={{.State.Status}}
	I0917 01:21:13.282147  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:21:13.285590  841202 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 01:21:13.285680  841202 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 01:21:13.285779  841202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-333616
	I0917 01:21:13.310642  841202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/kindnet-333616/id_rsa Username:docker}
	I0917 01:21:13.331185  841202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 01:21:13.371036  841202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 01:21:13.413176  841202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 01:21:13.435107  841202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 01:21:13.535558  841202 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0917 01:21:13.538037  841202 node_ready.go:35] waiting up to 15m0s for node "kindnet-333616" to be "Ready" ...
	I0917 01:21:13.774449  841202 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	
	
	==> CRI-O <==
	Sep 17 01:20:23 default-k8s-diff-port-377743 systemd[1]: crio.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 01:20:23 default-k8s-diff-port-377743 systemd[1]: crio.service: Failed with result 'exit-code'.
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: Starting Container Runtime Interface for OCI (CRI-O)...
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900219900Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900375290Z" level=info msg="Node configuration value for hugetlb cgroup is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900412189Z" level=info msg="Node configuration value for pid cgroup is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900479004Z" level=info msg="Node configuration value for memoryswap cgroup is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.900490617Z" level=info msg="Node configuration value for cgroup v2 is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.906797224Z" level=info msg="Node configuration value for systemd CollectMode is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.913464400Z" level=info msg="Node configuration value for systemd AllowedCPUs is true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.913750835Z" level=info msg="[graphdriver] using prior storage driver: overlay"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.914768261Z" level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.917639624Z" level=info msg="Conmon does support the --sync option"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.917673061Z" level=info msg="Conmon does support the --log-global-size-max option"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919571823Z" level=info msg="Using seccomp default profile when unspecified: true"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919593931Z" level=info msg="No seccomp profile specified, using the internal default"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919603651Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919611839Z" level=info msg="No blockio config file specified, blockio not configured"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.919618928Z" level=info msg="RDT not available in the host system"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.924637958Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.924675992Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
	Sep 17 01:20:24 default-k8s-diff-port-377743 crio[521]: time="2025-09-17 01:20:24.937133863Z" level=fatal msg="too many open files"
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: crio.service: Main process exited, code=exited, status=1/FAILURE
	Sep 17 01:20:24 default-k8s-diff-port-377743 systemd[1]: crio.service: Failed with result 'exit-code'.
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:15Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8444 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.003350] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996938] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.503895] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +1.500698] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.996505] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[  +0.051405] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 16 85 9f b9 a5 08 06
	[  +0.000473] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 02 3b bc ba ae 08 06
	[  +0.452658] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev vethf1701049
	[ +23.039791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +2.000822] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998771] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.502900] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.498360] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.998791] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.003444] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.997565] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.503051] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.496535] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +1.000842] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.004289] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.995906] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	[  +0.504963] IPv4: martian destination 127.0.0.11 from 10.244.0.4, dev veth0ead9b9d
	
	
	==> kernel <==
	 01:21:15 up  4:03,  0 users,  load average: 2.56, 3.16, 2.37
	Linux default-k8s-diff-port-377743 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kubelet <==
	-- No entries --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 01:21:15.434930  847766 logs.go:279] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:15Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:15.470974  847766 logs.go:279] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:15Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:15.506260  847766 logs.go:279] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:15Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:15.542787  847766 logs.go:279] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:15Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:15.577070  847766 logs.go:279] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:15Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:15.611419  847766 logs.go:279] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:15Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:15.644363  847766 logs.go:279] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:15Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:15.676955  847766 logs.go:279] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:15Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""
	E0917 01:21:15.710508  847766 logs.go:279] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-09-17T01:21:15Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: connection refused\""

                                                
                                                
** /stderr **
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743: exit status 6 (305.94595ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 01:21:16.263469  848335 status.go:458] kubeconfig endpoint: get endpoint: "default-k8s-diff-port-377743" does not appear in /home/jenkins/minikube-integration/21550-517646/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "default-k8s-diff-port-377743" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.83s)
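Note: the recurring "connection refused" on /var/run/crio/crio.sock in the logs above follows directly from crio.service exiting with level=fatal msg="too many open files" in the CRI-O journal, so the runtime socket was never served. A minimal diagnostic sketch, assuming the usual cause on a loaded CI host (file-descriptor or inotify limit exhaustion from many parallel clusters; this run does not confirm which): the drop-in path and the LimitNOFILE value below are illustrative, not values taken from this report.

	# Inspect the limits the failed service ran with (real systemd/sysctl interfaces)
	systemctl show crio.service -p LimitNOFILE
	cat /proc/sys/fs/inotify/max_user_instances /proc/sys/fs/inotify/max_user_watches
	# One possible mitigation: raise the per-service fd limit via a drop-in, then restart
	sudo mkdir -p /etc/systemd/system/crio.service.d
	printf '[Service]\nLimitNOFILE=1048576\n' | sudo tee /etc/systemd/system/crio.service.d/10-nofile.conf
	sudo systemctl daemon-reload
	sudo systemctl restart crio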

                                                
                                    

Test pass (271/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.75
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.0/json-events 4.35
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.07
18 TestDownloadOnly/v1.34.0/DeleteAll 0.48
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.37
20 TestDownloadOnlyKic 1.83
21 TestBinaryMirror 0.87
22 TestOffline 65.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 406.15
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 84.53
36 TestAddons/parallel/RegistryCreds 0.68
38 TestAddons/parallel/InspektorGadget 6.28
39 TestAddons/parallel/MetricsServer 5.68
42 TestAddons/parallel/Headlamp 121.57
43 TestAddons/parallel/CloudSpanner 5.51
45 TestAddons/parallel/NvidiaDevicePlugin 6.51
48 TestAddons/StoppedEnableDisable 23.33
49 TestCertOptions 31.58
50 TestCertExpiration 216.04
52 TestForceSystemdFlag 24.95
53 TestForceSystemdEnv 27.63
55 TestKVMDriverInstallOrUpdate 1.66
59 TestErrorSpam/setup 21.1
60 TestErrorSpam/start 0.67
61 TestErrorSpam/status 0.95
62 TestErrorSpam/pause 1.6
63 TestErrorSpam/unpause 1.68
64 TestErrorSpam/stop 2.59
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 39.38
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 7.02
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.37
76 TestFunctional/serial/CacheCmd/cache/add_local 1.06
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.89
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
84 TestFunctional/serial/ExtraConfig 45.04
85 TestFunctional/serial/ComponentHealth 0.08
86 TestFunctional/serial/LogsCmd 1.58
87 TestFunctional/serial/LogsFileCmd 1.51
88 TestFunctional/serial/InvalidService 4.2
90 TestFunctional/parallel/ConfigCmd 0.41
92 TestFunctional/parallel/DryRun 0.38
93 TestFunctional/parallel/InternationalLanguage 0.17
94 TestFunctional/parallel/StatusCmd 1.01
99 TestFunctional/parallel/AddonsCmd 0.14
102 TestFunctional/parallel/SSHCmd 0.56
103 TestFunctional/parallel/CpCmd 1.89
105 TestFunctional/parallel/FileSync 0.31
106 TestFunctional/parallel/CertSync 1.84
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
114 TestFunctional/parallel/License 0.27
116 TestFunctional/parallel/Version/short 0.06
117 TestFunctional/parallel/Version/components 0.55
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
122 TestFunctional/parallel/ImageCommands/ImageBuild 2.76
123 TestFunctional/parallel/ImageCommands/Setup 0.48
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.52
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.99
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.13
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
131 TestFunctional/parallel/ProfileCmd/profile_list 0.42
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.54
133 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
135 TestFunctional/parallel/MountCmd/any-port 33.64
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.86
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
138 TestFunctional/parallel/MountCmd/specific-port 1.72
139 TestFunctional/parallel/MountCmd/VerifyCleanup 1.87
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
142 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
150 TestFunctional/parallel/ServiceCmd/List 1.7
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.71
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 116.22
163 TestMultiControlPlane/serial/DeployApp 5.31
164 TestMultiControlPlane/serial/PingHostFromPods 1.17
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.77
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.58
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.74
176 TestMultiControlPlane/serial/StopCluster 29.74
181 TestJSONOutput/start/Command 38.94
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.79
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.68
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 16.15
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.22
206 TestKicCustomNetwork/create_custom_network 31.42
207 TestKicCustomNetwork/use_default_bridge_network 24.16
208 TestKicExistingNetwork 24.76
209 TestKicCustomSubnet 24.25
210 TestKicStaticIP 24.81
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 49.67
215 TestMountStart/serial/StartWithMountFirst 5.68
216 TestMountStart/serial/VerifyMountFirst 0.26
217 TestMountStart/serial/StartWithMountSecond 6.23
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.68
220 TestMountStart/serial/VerifyMountPostDelete 0.26
221 TestMountStart/serial/Stop 1.19
222 TestMountStart/serial/RestartStopped 7.27
223 TestMountStart/serial/VerifyMountPostStop 0.27
226 TestMultiNode/serial/FreshStart2Nodes 122.14
227 TestMultiNode/serial/DeployApp2Nodes 4.67
228 TestMultiNode/serial/PingHostFrom2Pods 0.8
229 TestMultiNode/serial/AddNode 54.3
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.66
232 TestMultiNode/serial/CopyFile 9.67
233 TestMultiNode/serial/StopNode 2.29
234 TestMultiNode/serial/StartAfterStop 7.22
235 TestMultiNode/serial/RestartKeepsNodes 75.12
236 TestMultiNode/serial/DeleteNode 5.31
237 TestMultiNode/serial/StopMultiNode 28.85
238 TestMultiNode/serial/RestartMultiNode 50.94
239 TestMultiNode/serial/ValidateNameConflict 24.54
244 TestPreload 113.86
246 TestScheduledStopUnix 96.59
249 TestInsufficientStorage 9.46
250 TestRunningBinaryUpgrade 74.27
253 TestMissingContainerUpgrade 77.85
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestStoppedBinaryUpgrade/Setup 0.64
264 TestNoKubernetes/serial/StartWithK8s 44.47
265 TestStoppedBinaryUpgrade/Upgrade 62.84
266 TestNoKubernetes/serial/StartWithStopK8s 25.41
267 TestStoppedBinaryUpgrade/MinikubeLogs 1.08
268 TestNoKubernetes/serial/Start 11.46
270 TestPause/serial/Start 43.99
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
272 TestNoKubernetes/serial/ProfileList 4.83
273 TestNoKubernetes/serial/Stop 1.22
274 TestNoKubernetes/serial/StartNoArgs 7.23
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
276 TestPause/serial/SecondStartNoReconfiguration 10.59
284 TestNetworkPlugins/group/false 3.64
285 TestPause/serial/Pause 0.8
286 TestPause/serial/VerifyStatus 0.35
287 TestPause/serial/Unpause 0.71
288 TestPause/serial/PauseAgain 0.8
289 TestPause/serial/DeletePaused 2.81
293 TestPause/serial/VerifyDeletedResources 17.75
295 TestStartStop/group/old-k8s-version/serial/FirstStart 49.76
297 TestStartStop/group/no-preload/serial/FirstStart 82.97
298 TestStartStop/group/old-k8s-version/serial/DeployApp 8.27
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.92
300 TestStartStop/group/old-k8s-version/serial/Stop 16.02
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
302 TestStartStop/group/old-k8s-version/serial/SecondStart 44.1
303 TestStartStop/group/no-preload/serial/DeployApp 9.27
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.83
305 TestStartStop/group/no-preload/serial/Stop 16.47
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
308 TestStartStop/group/no-preload/serial/SecondStart 43.83
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
311 TestStartStop/group/old-k8s-version/serial/Pause 3.15
313 TestStartStop/group/embed-certs/serial/FirstStart 109.78
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
316 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
317 TestStartStop/group/no-preload/serial/Pause 2.99
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 68.44
321 TestStartStop/group/newest-cni/serial/FirstStart 30.04
322 TestStartStop/group/newest-cni/serial/DeployApp 0
323 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.79
324 TestStartStop/group/newest-cni/serial/Stop 2.41
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
326 TestStartStop/group/newest-cni/serial/SecondStart 15.06
327 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
328 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
330 TestStartStop/group/newest-cni/serial/Pause 2.6
331 TestNetworkPlugins/group/auto/Start 40.07
332 TestStartStop/group/embed-certs/serial/DeployApp 8.25
333 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.28
334 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.86
335 TestStartStop/group/embed-certs/serial/Stop 16.48
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.85
337 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.34
338 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
339 TestStartStop/group/embed-certs/serial/SecondStart 52.88
340 TestNetworkPlugins/group/auto/KubeletFlags 0.35
341 TestNetworkPlugins/group/auto/NetCatPod 9.25
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
344 TestNetworkPlugins/group/auto/DNS 0.15
345 TestNetworkPlugins/group/auto/Localhost 0.15
346 TestNetworkPlugins/group/auto/HairPin 0.13
347 TestNetworkPlugins/group/kindnet/Start 41.5
348 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
351 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
354 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
355 TestStartStop/group/embed-certs/serial/Pause 2.96
356 TestNetworkPlugins/group/calico/Start 84.85
357 TestNetworkPlugins/group/custom-flannel/Start 87.51
358 TestNetworkPlugins/group/bridge/Start 68.76
359 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
360 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
361 TestNetworkPlugins/group/kindnet/NetCatPod 9.21
362 TestNetworkPlugins/group/kindnet/DNS 0.26
363 TestNetworkPlugins/group/kindnet/Localhost 0.21
364 TestNetworkPlugins/group/kindnet/HairPin 0.18
365 TestNetworkPlugins/group/flannel/Start 114.95
366 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
367 TestNetworkPlugins/group/bridge/NetCatPod 9.28
368 TestNetworkPlugins/group/bridge/DNS 0.15
369 TestNetworkPlugins/group/bridge/Localhost 0.12
370 TestNetworkPlugins/group/bridge/HairPin 0.12
371 TestNetworkPlugins/group/calico/ControllerPod 6.01
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.21
374 TestNetworkPlugins/group/calico/KubeletFlags 0.34
375 TestNetworkPlugins/group/calico/NetCatPod 9.24
376 TestNetworkPlugins/group/custom-flannel/DNS 0.14
377 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
378 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
379 TestNetworkPlugins/group/calico/DNS 0.16
380 TestNetworkPlugins/group/calico/Localhost 0.12
381 TestNetworkPlugins/group/calico/HairPin 0.13
382 TestNetworkPlugins/group/enable-default-cni/Start 32.66
383 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
384 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.18
385 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
386 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
387 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
388 TestNetworkPlugins/group/flannel/ControllerPod 6.01
389 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
390 TestNetworkPlugins/group/flannel/NetCatPod 9.19
391 TestNetworkPlugins/group/flannel/DNS 0.13
392 TestNetworkPlugins/group/flannel/Localhost 0.11
393 TestNetworkPlugins/group/flannel/HairPin 0.11

TestDownloadOnly/v1.28.0/json-events (5.75s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-997829 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-997829 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.746111741s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.75s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0916 23:48:18.321748  521273 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0916 23:48:18.321872  521273 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
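For context on what this check covers: the preload is treated as present only when the tarball at the cache path above exists and matches the md5 digest appended to the download URL (the "?checksum=md5:..." suffix visible in the LogsDuration output below). A minimal Go sketch of that verification, reusing the path and digest from this run's logs — illustrative only, not minikube's actual preload.go code:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

// verifyMD5 streams the file through an md5 hash and compares the
// hex digest against the expected value.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Path and digest copied from this run's logs.
	path := "/home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
	if err := verifyMD5(path, "72bc7f8573f574c02d8c9a9b3496176b"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("preload tarball verified")
}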

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-997829
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-997829: exit status 85 (65.871009ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-997829 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-997829 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:48:12
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:48:12.624771  521285 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:48:12.624886  521285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:48:12.624891  521285 out.go:374] Setting ErrFile to fd 2...
	I0916 23:48:12.624895  521285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:48:12.625115  521285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	W0916 23:48:12.625263  521285 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21550-517646/.minikube/config/config.json: open /home/jenkins/minikube-integration/21550-517646/.minikube/config/config.json: no such file or directory
	I0916 23:48:12.625798  521285 out.go:368] Setting JSON to true
	I0916 23:48:12.626916  521285 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9036,"bootTime":1758057457,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:48:12.627027  521285 start.go:140] virtualization: kvm guest
	I0916 23:48:12.629276  521285 out.go:99] [download-only-997829] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:48:12.629469  521285 notify.go:220] Checking for updates...
	W0916 23:48:12.629513  521285 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 23:48:12.630977  521285 out.go:171] MINIKUBE_LOCATION=21550
	I0916 23:48:12.632950  521285 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:48:12.635143  521285 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0916 23:48:12.636655  521285 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0916 23:48:12.637923  521285 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 23:48:12.640200  521285 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 23:48:12.640514  521285 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:48:12.668677  521285 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:48:12.668818  521285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:48:12.728538  521285 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:48:12.718114005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:48:12.728653  521285 docker.go:318] overlay module found
	I0916 23:48:12.730396  521285 out.go:99] Using the docker driver based on user configuration
	I0916 23:48:12.730426  521285 start.go:304] selected driver: docker
	I0916 23:48:12.730434  521285 start.go:918] validating driver "docker" against <nil>
	I0916 23:48:12.730566  521285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:48:12.789593  521285 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:48:12.779000158 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:48:12.789783  521285 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:48:12.790342  521285 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0916 23:48:12.790538  521285 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 23:48:12.792306  521285 out.go:171] Using Docker driver with root privileges
	I0916 23:48:12.793837  521285 cni.go:84] Creating CNI manager for ""
	I0916 23:48:12.793924  521285 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0916 23:48:12.793939  521285 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 23:48:12.794075  521285 start.go:348] cluster config:
	{Name:download-only-997829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-997829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:48:12.795554  521285 out.go:99] Starting "download-only-997829" primary control-plane node in "download-only-997829" cluster
	I0916 23:48:12.795592  521285 cache.go:123] Beginning downloading kic base image for docker with crio
	I0916 23:48:12.796940  521285 out.go:99] Pulling base image v0.0.48 ...
	I0916 23:48:12.796978  521285 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0916 23:48:12.797114  521285 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0916 23:48:12.817062  521285 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0916 23:48:12.817302  521285 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0916 23:48:12.817440  521285 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0916 23:48:12.818678  521285 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0916 23:48:12.818705  521285 cache.go:58] Caching tarball of preloaded images
	I0916 23:48:12.818866  521285 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0916 23:48:12.820698  521285 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0916 23:48:12.820725  521285 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0916 23:48:12.850136  521285 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0916 23:48:15.824569  521285 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0916 23:48:15.824663  521285 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0916 23:48:16.794354  521285 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0916 23:48:16.794838  521285 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/download-only-997829/config.json ...
	I0916 23:48:16.794885  521285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/download-only-997829/config.json: {Name:mkf9731b466806723f337d01f87f673ddcbef3ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:48:16.795071  521285 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0916 23:48:16.795216  521285 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21550-517646/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-997829 host does not exist
	  To start a cluster, run: "minikube start -p download-only-997829"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
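The non-zero exit here is the expected path: on a profile whose host was never created (as the stdout above notes), "minikube logs" exits with status 85, and the test counts that as a pass. A hedged sketch of how such an exit code can be asserted with Go's os/exec — the standard-library pattern, not the suite's actual helper:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runExitCode runs a command and reports its exit status; -1 means
// the process could not be started at all.
func runExitCode(name string, args ...string) (int, error) {
	err := exec.Command(name, args...).Run()
	if err == nil {
		return 0, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), nil
	}
	return -1, err
}

func main() {
	code, err := runExitCode("out/minikube-linux-amd64", "logs", "-p", "download-only-997829")
	if err != nil {
		fmt.Println("could not run:", err)
		return
	}
	fmt.Println("exit status:", code) // 85 is the value the test above expects
}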

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-997829
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (4.35s)

=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-515641 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-515641 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.354467426s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (4.35s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0916 23:48:23.134444  521273 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0916 23:48:23.134492  521273 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-517646/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-515641
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-515641: exit status 85 (70.174813ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-997829 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-997829 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ delete  │ -p download-only-997829                                                                                                                                                   │ download-only-997829 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │ 16 Sep 25 23:48 UTC │
	│ start   │ -o=json --download-only -p download-only-515641 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-515641 │ jenkins │ v1.37.0 │ 16 Sep 25 23:48 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:48:18
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:48:18.825574  521630 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:48:18.825851  521630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:48:18.825861  521630 out.go:374] Setting ErrFile to fd 2...
	I0916 23:48:18.825865  521630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:48:18.826076  521630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0916 23:48:18.826619  521630 out.go:368] Setting JSON to true
	I0916 23:48:18.829430  521630 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9042,"bootTime":1758057457,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:48:18.829626  521630 start.go:140] virtualization: kvm guest
	I0916 23:48:18.832084  521630 out.go:99] [download-only-515641] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:48:18.832500  521630 notify.go:220] Checking for updates...
	I0916 23:48:18.834135  521630 out.go:171] MINIKUBE_LOCATION=21550
	I0916 23:48:18.836043  521630 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:48:18.837890  521630 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0916 23:48:18.839506  521630 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0916 23:48:18.840885  521630 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 23:48:18.843423  521630 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 23:48:18.843788  521630 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:48:18.869043  521630 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0916 23:48:18.869169  521630 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:48:18.928150  521630 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:48:18.917727484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:48:18.928252  521630 docker.go:318] overlay module found
	I0916 23:48:18.930070  521630 out.go:99] Using the docker driver based on user configuration
	I0916 23:48:18.930110  521630 start.go:304] selected driver: docker
	I0916 23:48:18.930117  521630 start.go:918] validating driver "docker" against <nil>
	I0916 23:48:18.930215  521630 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 23:48:18.986061  521630 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-16 23:48:18.975872427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 23:48:18.986276  521630 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:48:18.986833  521630 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0916 23:48:18.986998  521630 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 23:48:18.988671  521630 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-515641 host does not exist
	  To start a cluster, run: "minikube start -p download-only-515641"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.48s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.48s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-515641
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                    
TestDownloadOnlyKic (1.83s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-660125 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-660125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-660125
--- PASS: TestDownloadOnlyKic (1.83s)

                                                
                                    
TestBinaryMirror (0.87s)

=== RUN   TestBinaryMirror
I0916 23:48:26.747062  521273 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-785971 --alsologtostderr --binary-mirror http://127.0.0.1:38515 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-785971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-785971
--- PASS: TestBinaryMirror (0.87s)
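The --binary-mirror flag points minikube's kubectl/kubelet downloads at a local HTTP endpoint instead of dl.k8s.io. As a rough illustration — the directory layout below is an assumption, not taken from this log — a plain file server like the following could stand in for the mirror the test points at on 127.0.0.1:38515:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Hypothetical local tree mirroring the upstream release paths,
	// e.g. ./mirror/release/v1.34.0/bin/linux/amd64/kubectl
	fs := http.FileServer(http.Dir("./mirror"))
	log.Fatal(http.ListenAndServe("127.0.0.1:38515", fs))
}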

                                                
                                    
TestOffline (65.61s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-226425 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-226425 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m3.003210082s)
helpers_test.go:175: Cleaning up "offline-crio-226425" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-226425
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-226425: (2.610985427s)
--- PASS: TestOffline (65.61s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-069011
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-069011: exit status 85 (58.980801ms)

                                                
                                                
-- stdout --
	* Profile "addons-069011" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-069011"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-069011
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-069011: exit status 85 (59.765083ms)

                                                
                                                
-- stdout --
	* Profile "addons-069011" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-069011"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (406.15s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-069011 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-069011 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (6m46.146923705s)
--- PASS: TestAddons/Setup (406.15s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-069011 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-069011 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (84.53s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-069011 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-069011 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ce6e9e14-7432-498a-a877-a0187553b840] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ce6e9e14-7432-498a-a877-a0187553b840] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 1m24.003803168s
addons_test.go:694: (dbg) Run:  kubectl --context addons-069011 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-069011 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-069011 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (84.53s)
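The "waiting 8m0s for pods matching ..." lines above come from a poll-and-retry helper: re-check the pods selected by the label until they report healthy or the deadline passes. A minimal sketch of that loop shape, with hypothetical names (the suite's real helper lives in helpers_test.go):

package main

import (
	"context"
	"fmt"
	"time"
)

// pollUntil re-runs check every interval until it returns true or the
// timeout elapses, mirroring the wait loops in this report.
func pollUntil(timeout, interval time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return context.DeadlineExceeded
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	err := pollUntil(8*time.Minute, 2*time.Second, func() (bool, error) {
		// stand-in for "list pods with integration-test=busybox and
		// check that they are Running"
		return time.Since(start) > 10*time.Second, nil
	})
	fmt.Println(err) // nil once the condition held within the deadline
}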

                                                
                                    
TestAddons/parallel/RegistryCreds (0.68s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.506448ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-069011
addons_test.go:332: (dbg) Run:  kubectl --context addons-069011 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.68s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.28s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-g862x" [111ec2ae-1eb6-4c42-a864-2a8f1e4a795a] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003595404s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.28s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.68s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.696418ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-bdljp" [6c84974f-9dfb-4207-9719-f79066d8117f] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004032909s
addons_test.go:463: (dbg) Run:  kubectl --context addons-069011 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.68s)

                                                
                                    
TestAddons/parallel/Headlamp (121.57s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-069011 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-zsqjc" [55a5f00e-97bb-4a4e-97c9-956de534037c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-zsqjc" [55a5f00e-97bb-4a4e-97c9-956de534037c] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 1m55.003625478s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-069011 addons disable headlamp --alsologtostderr -v=1: (5.715708541s)
--- PASS: TestAddons/parallel/Headlamp (121.57s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.51s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-wtp6g" [9b1e7a9d-f6c1-46d3-81bb-2ad1a9de3762] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003298442s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.51s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.51s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-vkzmn" [95694fda-47ed-4239-9097-bd2c9132ef3d] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005043339s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

                                                
                                    
TestAddons/StoppedEnableDisable (23.33s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-069011
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-069011: (23.050443602s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-069011
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-069011
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-069011
--- PASS: TestAddons/StoppedEnableDisable (23.33s)

                                                
                                    
TestCertOptions (31.58s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-081780 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-081780 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (28.507390925s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-081780 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-081780 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-081780 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-081780" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-081780
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-081780: (2.41450023s)
--- PASS: TestCertOptions (31.58s)

                                                
                                    
TestCertExpiration (216.04s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-186876 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-186876 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.630842545s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-186876 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-186876 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.942038207s)
helpers_test.go:175: Cleaning up "cert-expiration-186876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-186876
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-186876: (2.471018836s)
--- PASS: TestCertExpiration (216.04s)
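
The test starts with a deliberately short 3m certificate lifetime, then restarts with --cert-expiration=8760h so the certificates are regenerated. A hedged way to confirm the new expiry by hand (the openssl call is illustrative; the test itself only checks that the restart succeeds):

	minikube start -p cert-expiration-demo --cert-expiration=3m --driver=docker --container-runtime=crio
	minikube start -p cert-expiration-demo --cert-expiration=8760h --driver=docker --container-runtime=crio
	minikube -p cert-expiration-demo ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"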

                                                
                                    
TestForceSystemdFlag (24.95s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-642641 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0917 01:15:14.436714  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-642641 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.200382309s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-642641 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-642641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-642641
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-642641: (2.46131289s)
--- PASS: TestForceSystemdFlag (24.95s)
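
What this asserts: with --force-systemd, CRI-O inside the node is switched to the systemd cgroup manager, visible in the drop-in config the test cats. A minimal sketch (the grep is an illustrative addition):

	minikube start -p force-systemd-demo --force-systemd --driver=docker --container-runtime=crio
	minikube -p force-systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager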

                                                
                                    
TestForceSystemdEnv (27.63s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-458374 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-458374 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.654130469s)
helpers_test.go:175: Cleaning up "force-systemd-env-458374" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-458374
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-458374: (4.97173426s)
--- PASS: TestForceSystemdEnv (27.63s)
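
The env variant drives the same behavior through the MINIKUBE_FORCE_SYSTEMD variable instead of a flag; a minimal sketch (profile name illustrative):

	MINIKUBE_FORCE_SYSTEMD=true minikube start -p force-systemd-env-demo --driver=docker --container-runtime=crio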

                                                
                                    
TestKVMDriverInstallOrUpdate (1.66s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0917 01:15:04.799562  521273 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0917 01:15:04.799725  521273 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0917 01:15:04.830415  521273 install.go:62] docker-machine-driver-kvm2: exit status 1
W0917 01:15:04.830566  521273 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0917 01:15:04.830621  521273 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3022582839/001/docker-machine-driver-kvm2
I0917 01:15:05.093714  521273 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3022582839/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80] Decompressors:map[bz2:0xc0003c8cc0 gz:0xc0003c8cc8 tar:0xc0003c8c40 tar.bz2:0xc0003c8c50 tar.gz:0xc0003c8c60 tar.xz:0xc0003c8c90 tar.zst:0xc0003c8ca0 tbz2:0xc0003c8c50 tgz:0xc0003c8c60 txz:0xc0003c8c90 tzst:0xc0003c8ca0 xz:0xc0003c8ce0 zip:0xc0003c8cf0 zst:0xc0003c8ce8] Getters:map[file:0xc001702260 http:0xc000072a50 https:0xc000072aa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0917 01:15:05.093783  521273 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3022582839/001/docker-machine-driver-kvm2
I0917 01:15:05.937937  521273 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0917 01:15:05.938022  521273 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0917 01:15:05.967344  521273 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0917 01:15:05.967385  521273 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0917 01:15:05.967481  521273 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0917 01:15:05.967512  521273 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3022582839/002/docker-machine-driver-kvm2
I0917 01:15:05.994494  521273 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3022582839/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80] Decompressors:map[bz2:0xc0003c8cc0 gz:0xc0003c8cc8 tar:0xc0003c8c40 tar.bz2:0xc0003c8c50 tar.gz:0xc0003c8c60 tar.xz:0xc0003c8c90 tar.zst:0xc0003c8ca0 tbz2:0xc0003c8c50 tgz:0xc0003c8c60 txz:0xc0003c8c90 tzst:0xc0003c8ca0 xz:0xc0003c8ce0 zip:0xc0003c8cf0 zst:0xc0003c8ce8] Getters:map[file:0xc0017028d0 http:0xc0003b5270 https:0xc0003b52c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0917 01:15:05.994547  521273 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3022582839/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.66s)
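
The 404s above are expected: the installer first probes the arch-suffixed release asset and falls back to the unsuffixed name when the checksum fetch fails. A sketch of the two URLs it tries for v1.3.0 (curl stands in for the internal downloader):

	curl -fLO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64   # arch-specific; may 404 on old releases
	curl -fLO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2        # common-name fallback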

                                                
                                    
TestErrorSpam/setup (21.1s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-582600 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-582600 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-582600 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-582600 --driver=docker  --container-runtime=crio: (21.095654037s)
--- PASS: TestErrorSpam/setup (21.10s)

                                                
                                    
TestErrorSpam/start (0.67s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582600 --log_dir /tmp/nospam-582600 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582600 --log_dir /tmp/nospam-582600 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582600 --log_dir /tmp/nospam-582600 start --dry-run
--- PASS: TestErrorSpam/start (0.67s)

                                                
                                    
TestErrorSpam/status (0.95s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582600 --log_dir /tmp/nospam-582600 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582600 --log_dir /tmp/nospam-582600 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582600 --log_dir /tmp/nospam-582600 status
--- PASS: TestErrorSpam/status (0.95s)

                                                
                                    
TestErrorSpam/pause (1.6s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582600 --log_dir /tmp/nospam-582600 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582600 --log_dir /tmp/nospam-582600 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582600 --log_dir /tmp/nospam-582600 pause
--- PASS: TestErrorSpam/pause (1.60s)

                                                
                                    
TestErrorSpam/unpause (1.68s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582600 --log_dir /tmp/nospam-582600 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582600 --log_dir /tmp/nospam-582600 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582600 --log_dir /tmp/nospam-582600 unpause
--- PASS: TestErrorSpam/unpause (1.68s)

                                                
                                    
TestErrorSpam/stop (2.59s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582600 --log_dir /tmp/nospam-582600 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-582600 --log_dir /tmp/nospam-582600 stop: (2.393170099s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582600 --log_dir /tmp/nospam-582600 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582600 --log_dir /tmp/nospam-582600 stop
--- PASS: TestErrorSpam/stop (2.59s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21550-517646/.minikube/files/etc/test/nested/copy/521273/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (39.38s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-836309 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0917 00:10:14.441037  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:10:14.447630  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:10:14.459089  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:10:14.480554  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:10:14.522088  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:10:14.603631  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:10:14.765226  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:10:15.087148  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:10:15.729358  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:10:17.010742  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-836309 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (39.378104123s)
--- PASS: TestFunctional/serial/StartWithProxy (39.38s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (7.02s)

=== RUN   TestFunctional/serial/SoftStart
I0917 00:10:18.342840  521273 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-836309 --alsologtostderr -v=8
E0917 00:10:19.572402  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:10:24.694316  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-836309 --alsologtostderr -v=8: (7.017258887s)
functional_test.go:678: soft start took 7.018044348s for "functional-836309" cluster.
I0917 00:10:25.360487  521273 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (7.02s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-836309 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-836309 cache add registry.k8s.io/pause:3.1: (1.117767649s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-836309 cache add registry.k8s.io/pause:3.3: (1.173009907s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-836309 cache add registry.k8s.io/pause:latest: (1.082139451s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.37s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-836309 /tmp/TestFunctionalserialCacheCmdcacheadd_local1377471633/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 cache add minikube-local-cache-test:functional-836309
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 cache delete minikube-local-cache-test:functional-836309
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-836309
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-836309 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (312.266536ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)
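
Condensed, the sequence this test exercises: delete a cached image inside the node, confirm it is gone, then let "cache reload" push it back from the host-side cache:

	minikube -p functional-836309 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-836309 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image removed
	minikube -p functional-836309 cache reload
	minikube -p functional-836309 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after reload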

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 kubectl -- --context functional-836309 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-836309 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (45.04s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-836309 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0917 00:10:34.936191  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:10:55.417878  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-836309 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.035705997s)
functional_test.go:776: restart took 45.035844624s for "functional-836309" cluster.
I0917 00:11:17.580147  521273 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (45.04s)
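
--extra-config takes component.key=value pairs; here it enables the NamespaceAutoProvision admission plugin on the apiserver. Other components follow the same shape, e.g. (the kubelet pair is illustrative, not part of this run):

	minikube start --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision
	minikube start --extra-config=kubelet.housekeeping-interval=5m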

                                                
                                    
TestFunctional/serial/ComponentHealth (0.08s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-836309 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.58s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-836309 logs: (1.581160612s)
--- PASS: TestFunctional/serial/LogsCmd (1.58s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.51s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 logs --file /tmp/TestFunctionalserialLogsFileCmd2974647057/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-836309 logs --file /tmp/TestFunctionalserialLogsFileCmd2974647057/001/logs.txt: (1.509760644s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                    
TestFunctional/serial/InvalidService (4.2s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-836309 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-836309
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-836309: exit status 115 (457.49503ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32499 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-836309 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.20s)
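
SVC_UNREACHABLE here means the Service object exists but selects no running pods. A hedged way to see that directly (the endpoints check is illustrative, not part of the test):

	kubectl --context functional-836309 apply -f testdata/invalidsvc.yaml
	kubectl --context functional-836309 get endpoints invalid-svc   # no ready addresses, so "minikube service" exits 115
	kubectl --context functional-836309 delete -f testdata/invalidsvc.yaml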

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-836309 config get cpus: exit status 14 (79.47989ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-836309 config get cpus: exit status 14 (65.705856ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)

                                                
                                    
TestFunctional/parallel/DryRun (0.38s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-836309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-836309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (163.053381ms)

-- stdout --
	* [functional-836309] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0917 00:17:40.557926  582984 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:17:40.558250  582984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:17:40.558261  582984 out.go:374] Setting ErrFile to fd 2...
	I0917 00:17:40.558265  582984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:17:40.558524  582984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:17:40.559040  582984 out.go:368] Setting JSON to false
	I0917 00:17:40.560151  582984 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10804,"bootTime":1758057457,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:17:40.560284  582984 start.go:140] virtualization: kvm guest
	I0917 00:17:40.562242  582984 out.go:179] * [functional-836309] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:17:40.563772  582984 notify.go:220] Checking for updates...
	I0917 00:17:40.563783  582984 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:17:40.565143  582984 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:17:40.566268  582984 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:17:40.567613  582984 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 00:17:40.568926  582984 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:17:40.570287  582984 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:17:40.572174  582984 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:17:40.572724  582984 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:17:40.599467  582984 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:17:40.599623  582984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:17:40.659160  582984 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-17 00:17:40.647503425 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:17:40.659283  582984 docker.go:318] overlay module found
	I0917 00:17:40.661028  582984 out.go:179] * Using the docker driver based on existing profile
	I0917 00:17:40.662348  582984 start.go:304] selected driver: docker
	I0917 00:17:40.662363  582984 start.go:918] validating driver "docker" against &{Name:functional-836309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-836309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:17:40.662489  582984 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:17:40.664414  582984 out.go:203] 
	W0917 00:17:40.665694  582984 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0917 00:17:40.666976  582984 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-836309 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.38s)
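
The failing case confirms the 1800MB usable minimum; a dry run with a compliant allocation validates the configuration without creating anything. A minimal sketch:

	minikube start -p functional-836309 --dry-run --memory 2048mb --driver=docker --container-runtime=crio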

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-836309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-836309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (168.41833ms)

-- stdout --
	* [functional-836309] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I0917 00:17:40.936845  583199 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:17:40.936953  583199 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:17:40.936960  583199 out.go:374] Setting ErrFile to fd 2...
	I0917 00:17:40.936966  583199 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:17:40.937339  583199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:17:40.937877  583199 out.go:368] Setting JSON to false
	I0917 00:17:40.938867  583199 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":10804,"bootTime":1758057457,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:17:40.938993  583199 start.go:140] virtualization: kvm guest
	I0917 00:17:40.941492  583199 out.go:179] * [functional-836309] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0917 00:17:40.944227  583199 notify.go:220] Checking for updates...
	I0917 00:17:40.944335  583199 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:17:40.946765  583199 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:17:40.948295  583199 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 00:17:40.949696  583199 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 00:17:40.951158  583199 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:17:40.952856  583199 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:17:40.955046  583199 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:17:40.955588  583199 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:17:40.980713  583199 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 00:17:40.980830  583199 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:17:41.040600  583199 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-09-17 00:17:41.029871976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 00:17:41.040710  583199 docker.go:318] overlay module found
	I0917 00:17:41.043008  583199 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0917 00:17:41.045273  583199 start.go:304] selected driver: docker
	I0917 00:17:41.045298  583199 start.go:918] validating driver "docker" against &{Name:functional-836309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-836309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:17:41.045421  583199 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:17:41.048155  583199 out.go:203] 
	W0917 00:17:41.049889  583199 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0917 00:17:41.051309  583199 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.01s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.56s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.89s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh -n functional-836309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 cp functional-836309:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3564407236/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh -n functional-836309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh -n functional-836309 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.89s)

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/521273/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "sudo cat /etc/test/nested/copy/521273/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (1.84s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/521273.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "sudo cat /etc/ssl/certs/521273.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/521273.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "sudo cat /usr/share/ca-certificates/521273.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5212732.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "sudo cat /etc/ssl/certs/5212732.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5212732.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "sudo cat /usr/share/ca-certificates/5212732.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.84s)
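The hashed names verified above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention for CA certificate links. A hedged way to check the correspondence by hand, assuming openssl is present in the guest:

	out/minikube-linux-amd64 -p functional-836309 ssh "openssl x509 -noout -hash -in /usr/share/ca-certificates/521273.pem"

If the sync worked as the test expects, this prints the 51391683 prefix that the test then looks up under /etc/ssl/certs.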
TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-836309 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-836309 ssh "sudo systemctl is-active docker": exit status 1 (310.33458ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-836309 ssh "sudo systemctl is-active containerd": exit status 1 (309.074602ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.55s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.55s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-836309 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-836309
localhost/kicbase/echo-server:functional-836309
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-836309 image ls --format short --alsologtostderr:
I0917 00:21:31.365747  586881 out.go:360] Setting OutFile to fd 1 ...
I0917 00:21:31.365878  586881 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:21:31.365889  586881 out.go:374] Setting ErrFile to fd 2...
I0917 00:21:31.365893  586881 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:21:31.366143  586881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
I0917 00:21:31.366745  586881 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:21:31.366845  586881 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:21:31.367260  586881 cli_runner.go:164] Run: docker container inspect functional-836309 --format={{.State.Status}}
I0917 00:21:31.387875  586881 ssh_runner.go:195] Run: systemctl --version
I0917 00:21:31.387943  586881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-836309
I0917 00:21:31.409373  586881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/functional-836309/id_rsa Username:docker}
I0917 00:21:31.505445  586881 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
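As the stderr trace above shows, image ls is resolved by running crictl inside the node over SSH. The equivalent manual check, assuming the same profile, would be:

	out/minikube-linux-amd64 -p functional-836309 ssh -- sudo crictl images --output json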
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-836309 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ localhost/kicbase/echo-server           │ functional-836309  │ 9056ab77afb8e │ 4.94MB │
│ localhost/minikube-local-cache-test     │ functional-836309  │ b8825d1e43a79 │ 3.33kB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-836309 image ls --format table --alsologtostderr:
I0917 00:21:33.688704  587768 out.go:360] Setting OutFile to fd 1 ...
I0917 00:21:33.689005  587768 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:21:33.689018  587768 out.go:374] Setting ErrFile to fd 2...
I0917 00:21:33.689023  587768 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:21:33.689277  587768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
I0917 00:21:33.689919  587768 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:21:33.690018  587768 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:21:33.690457  587768 cli_runner.go:164] Run: docker container inspect functional-836309 --format={{.State.Status}}
I0917 00:21:33.711558  587768 ssh_runner.go:195] Run: systemctl --version
I0917 00:21:33.711608  587768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-836309
I0917 00:21:33.734092  587768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/functional-836309/id_rsa Username:docker}
I0917 00:21:33.830147  587768 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-836309 image ls --format json --alsologtostderr:
[{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddef
fadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a
141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"b8825d1e43a79d731263bdf69817c06d99e4f24e0b5ad713fe74ec34d5a9743e","repoDigests":["localhost/minikube-local-cache-test@sha256:1097c6c036ea50c6252a3509d17b8653edb2a4831dc3cc8c55266a11cb3ab3a9"],"repoTags":["localhost/minikube-local-cache-test:functional-836309"],"size":"3330"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c
7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/bus
ybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-836309"],"size":"4943877"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-836309 image ls --format json --alsologtostderr:
I0917 00:21:33.449613  587706 out.go:360] Setting OutFile to fd 1 ...
I0917 00:21:33.449941  587706 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:21:33.449958  587706 out.go:374] Setting ErrFile to fd 2...
I0917 00:21:33.449962  587706 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:21:33.450194  587706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
I0917 00:21:33.450904  587706 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:21:33.451021  587706 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:21:33.451495  587706 cli_runner.go:164] Run: docker container inspect functional-836309 --format={{.State.Status}}
I0917 00:21:33.470475  587706 ssh_runner.go:195] Run: systemctl --version
I0917 00:21:33.470533  587706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-836309
I0917 00:21:33.490061  587706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/functional-836309/id_rsa Username:docker}
I0917 00:21:33.586922  587706 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-836309 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-836309
size: "4943877"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: b8825d1e43a79d731263bdf69817c06d99e4f24e0b5ad713fe74ec34d5a9743e
repoDigests:
- localhost/minikube-local-cache-test@sha256:1097c6c036ea50c6252a3509d17b8653edb2a4831dc3cc8c55266a11cb3ab3a9
repoTags:
- localhost/minikube-local-cache-test:functional-836309
size: "3330"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-836309 image ls --format yaml --alsologtostderr:
I0917 00:21:31.603823  587010 out.go:360] Setting OutFile to fd 1 ...
I0917 00:21:31.603956  587010 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:21:31.603965  587010 out.go:374] Setting ErrFile to fd 2...
I0917 00:21:31.603971  587010 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:21:31.604192  587010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
I0917 00:21:31.604879  587010 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:21:31.604987  587010 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:21:31.605440  587010 cli_runner.go:164] Run: docker container inspect functional-836309 --format={{.State.Status}}
I0917 00:21:31.626844  587010 ssh_runner.go:195] Run: systemctl --version
I0917 00:21:31.626938  587010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-836309
I0917 00:21:31.646885  587010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/functional-836309/id_rsa Username:docker}
I0917 00:21:31.741148  587010 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-836309 ssh pgrep buildkitd: exit status 1 (282.434404ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 image build -t localhost/my-image:functional-836309 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-836309 image build -t localhost/my-image:functional-836309 testdata/build --alsologtostderr: (2.244347042s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-836309 image build -t localhost/my-image:functional-836309 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 15bc76afb2a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-836309
--> 74034010a49
Successfully tagged localhost/my-image:functional-836309
74034010a49b792b45dc1f67cbfb4b109222a106ad897bf11a45918407c319ca
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-836309 image build -t localhost/my-image:functional-836309 testdata/build --alsologtostderr:
I0917 00:21:32.122594  587306 out.go:360] Setting OutFile to fd 1 ...
I0917 00:21:32.123494  587306 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:21:32.123512  587306 out.go:374] Setting ErrFile to fd 2...
I0917 00:21:32.123516  587306 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:21:32.123732  587306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
I0917 00:21:32.124366  587306 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:21:32.125056  587306 config.go:182] Loaded profile config "functional-836309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:21:32.125516  587306 cli_runner.go:164] Run: docker container inspect functional-836309 --format={{.State.Status}}
I0917 00:21:32.145327  587306 ssh_runner.go:195] Run: systemctl --version
I0917 00:21:32.145404  587306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-836309
I0917 00:21:32.165362  587306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/functional-836309/id_rsa Username:docker}
I0917 00:21:32.260915  587306 build_images.go:161] Building image from path: /tmp/build.2743786372.tar
I0917 00:21:32.260993  587306 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0917 00:21:32.270881  587306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2743786372.tar
I0917 00:21:32.275114  587306 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2743786372.tar: stat -c "%s %y" /var/lib/minikube/build/build.2743786372.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2743786372.tar': No such file or directory
I0917 00:21:32.275148  587306 ssh_runner.go:362] scp /tmp/build.2743786372.tar --> /var/lib/minikube/build/build.2743786372.tar (3072 bytes)
I0917 00:21:32.305695  587306 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2743786372
I0917 00:21:32.316241  587306 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2743786372 -xf /var/lib/minikube/build/build.2743786372.tar
I0917 00:21:32.327069  587306 crio.go:315] Building image: /var/lib/minikube/build/build.2743786372
I0917 00:21:32.327168  587306 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-836309 /var/lib/minikube/build/build.2743786372 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0917 00:21:34.284046  587306 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-836309 /var/lib/minikube/build/build.2743786372 --cgroup-manager=cgroupfs: (1.9568105s)
I0917 00:21:34.284127  587306 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2743786372
I0917 00:21:34.295363  587306 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2743786372.tar
I0917 00:21:34.308814  587306 build_images.go:217] Built localhost/my-image:functional-836309 from /tmp/build.2743786372.tar
I0917 00:21:34.308854  587306 build_images.go:133] succeeded building to: functional-836309
I0917 00:21:34.308861  587306 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.76s)

TestFunctional/parallel/ImageCommands/Setup (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-836309
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 image load --daemon kicbase/echo-server:functional-836309 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-836309 image load --daemon kicbase/echo-server:functional-836309 --alsologtostderr: (1.265658177s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 image load --daemon kicbase/echo-server:functional-836309 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-836309
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 image load --daemon kicbase/echo-server:functional-836309 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "362.958931ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "57.596308ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 image save kicbase/echo-server:functional-836309 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "353.444514ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "60.718719ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 image rm kicbase/echo-server:functional-836309 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

TestFunctional/parallel/MountCmd/any-port (33.64s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-836309 /tmp/TestFunctionalparallelMountCmdany-port217136504/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1758067890026816102" to /tmp/TestFunctionalparallelMountCmdany-port217136504/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1758067890026816102" to /tmp/TestFunctionalparallelMountCmdany-port217136504/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1758067890026816102" to /tmp/TestFunctionalparallelMountCmdany-port217136504/001/test-1758067890026816102
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-836309 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (298.422623ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0917 00:11:30.325706  521273 retry.go:31] will retry after 312.96629ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 17 00:11 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 17 00:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 17 00:11 test-1758067890026816102
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh cat /mount-9p/test-1758067890026816102
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-836309 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [bfb55029-4450-4f8a-bf25-0fb8e820fc27] Pending
helpers_test.go:352: "busybox-mount" [bfb55029-4450-4f8a-bf25-0fb8e820fc27] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0917 00:11:36.379376  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox-mount" [bfb55029-4450-4f8a-bf25-0fb8e820fc27] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [bfb55029-4450-4f8a-bf25-0fb8e820fc27] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 31.003808834s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-836309 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-836309 /tmp/TestFunctionalparallelMountCmdany-port217136504/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (33.64s)
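The verification pattern above generalizes: start the 9p mount in the background, then confirm it from inside the guest with findmnt. A sketch, with /tmp/hostdir as a hypothetical host directory:

	out/minikube-linux-amd64 mount -p functional-836309 /tmp/hostdir:/mount-9p &
	out/minikube-linux-amd64 -p functional-836309 ssh "findmnt -T /mount-9p | grep 9p"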
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.86s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-836309
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 image save --daemon kicbase/echo-server:functional-836309 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-836309
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

TestFunctional/parallel/MountCmd/specific-port (1.72s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-836309 /tmp/TestFunctionalparallelMountCmdspecific-port2099491116/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-836309 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (279.754977ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0917 00:12:03.947454  521273 retry.go:31] will retry after 398.433049ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-836309 /tmp/TestFunctionalparallelMountCmdspecific-port2099491116/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-836309 ssh "sudo umount -f /mount-9p": exit status 1 (283.789668ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-836309 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-836309 /tmp/TestFunctionalparallelMountCmdspecific-port2099491116/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.72s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.87s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-836309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665606697/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-836309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665606697/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-836309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665606697/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-836309 ssh "findmnt -T" /mount1: exit status 1 (337.674363ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0917 00:12:05.730283  521273 retry.go:31] will retry after 642.405444ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-836309 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-836309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665606697/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-836309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665606697/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-836309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665606697/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.87s)
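
The first findmnt probe fails while the three 9p mounts are still appearing, and the harness simply retries after a short delay (retry.go:31 above). A small sketch of that poll-with-backoff shape, using only the standard library; the attempt count and doubling backoff are illustrative, not minikube's actual tuning:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryFindmnt polls `findmnt -T <target>` until it succeeds or
// attempts run out, roughly mirroring the retry in the log above.
func retryFindmnt(target string, attempts int, backoff time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("findmnt", "-T", target).Run(); err == nil {
			return nil
		}
		time.Sleep(backoff)
		backoff *= 2 // simple exponential backoff
	}
	return fmt.Errorf("%s never appeared as a mount: %w", target, err)
}

func main() {
	fmt.Println(retryFindmnt("/mount1", 5, 500*time.Millisecond))
}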

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-836309 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-836309 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-836309 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-836309 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 579935: os: process already finished
helpers_test.go:525: unable to kill pid 579747: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)
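
Tearing down the two tunnels races their own exit, so the harness accepts "process already finished" as a clean outcome. A sketch of that tolerant kill using only the standard library (the pid below is just the one from this log):

package main

import (
	"errors"
	"fmt"
	"os"
)

// killQuietly signals a process but ignores the case where it has
// already exited, matching the "process already finished" lines in
// the tunnel cleanup above.
func killQuietly(pid int) error {
	p, err := os.FindProcess(pid)
	if err != nil {
		return nil // never fails on unix, but be defensive
	}
	if err := p.Kill(); err != nil && !errors.Is(err, os.ErrProcessDone) {
		return err
	}
	return nil
}

func main() {
	fmt.Println(killQuietly(579935))
}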

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-836309 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-836309 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/List (1.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-836309 service list: (1.696979735s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.70s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-836309 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-836309 service list -o json: (1.708232719s)
functional_test.go:1504: Took "1.708355838s" to run "out/minikube-linux-amd64 -p functional-836309 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-836309
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-836309
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-836309
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (116.22s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-671025 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m55.482622591s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (116.22s)

TestMultiControlPlane/serial/DeployApp (5.31s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-671025 kubectl -- rollout status deployment/busybox: (3.093049319s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 kubectl -- exec busybox-7b57f96db7-dk9cf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 kubectl -- exec busybox-7b57f96db7-wj4r5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 kubectl -- exec busybox-7b57f96db7-zw5tc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 kubectl -- exec busybox-7b57f96db7-dk9cf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 kubectl -- exec busybox-7b57f96db7-wj4r5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 kubectl -- exec busybox-7b57f96db7-zw5tc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 kubectl -- exec busybox-7b57f96db7-dk9cf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 kubectl -- exec busybox-7b57f96db7-wj4r5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 kubectl -- exec busybox-7b57f96db7-zw5tc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.31s)
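
The DeployApp sequence fans the same three lookups (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local) across every busybox replica, so a resolution failure on any node fails the test. A condensed sketch of that fan-out, assuming kubectl on PATH; dnsCheckAllPods is an illustrative name, not the test's helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dnsCheckAllPods runs nslookup for each host inside each pod,
// mirroring the per-pod loop in the DeployApp test above.
func dnsCheckAllPods(ctx string, hosts []string) error {
	out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		return err
	}
	for _, pod := range strings.Fields(string(out)) {
		for _, h := range hosts {
			if err := exec.Command("kubectl", "--context", ctx,
				"exec", pod, "--", "nslookup", h).Run(); err != nil {
				return fmt.Errorf("%s cannot resolve %s: %w", pod, h, err)
			}
		}
	}
	return nil
}

func main() {
	hosts := []string{"kubernetes.io", "kubernetes.default",
		"kubernetes.default.svc.cluster.local"}
	fmt.Println(dnsCheckAllPods("ha-671025", hosts))
}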

TestMultiControlPlane/serial/PingHostFromPods (1.17s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 kubectl -- exec busybox-7b57f96db7-dk9cf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 kubectl -- exec busybox-7b57f96db7-dk9cf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 kubectl -- exec busybox-7b57f96db7-wj4r5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 kubectl -- exec busybox-7b57f96db7-wj4r5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 kubectl -- exec busybox-7b57f96db7-zw5tc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 kubectl -- exec busybox-7b57f96db7-zw5tc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.17s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-671025 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.77s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.77s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.58s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.58s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.74s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.74s)

TestMultiControlPlane/serial/StopCluster (29.74s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 stop --alsologtostderr -v 5
E0917 00:40:14.443602  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-671025 stop --alsologtostderr -v 5: (29.616827548s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-671025 status --alsologtostderr -v 5: exit status 7 (120.050278ms)

-- stdout --
	ha-671025
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-671025-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-671025-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 00:40:31.636368  632473 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:40:31.636665  632473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:40:31.636676  632473 out.go:374] Setting ErrFile to fd 2...
	I0917 00:40:31.636681  632473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:40:31.636940  632473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 00:40:31.637178  632473 out.go:368] Setting JSON to false
	I0917 00:40:31.637201  632473 mustload.go:65] Loading cluster: ha-671025
	I0917 00:40:31.637288  632473 notify.go:220] Checking for updates...
	I0917 00:40:31.638724  632473 config.go:182] Loaded profile config "ha-671025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:40:31.638801  632473 status.go:174] checking status of ha-671025 ...
	I0917 00:40:31.639505  632473 cli_runner.go:164] Run: docker container inspect ha-671025 --format={{.State.Status}}
	I0917 00:40:31.661819  632473 status.go:371] ha-671025 host status = "Stopped" (err=<nil>)
	I0917 00:40:31.661846  632473 status.go:384] host is not running, skipping remaining checks
	I0917 00:40:31.661853  632473 status.go:176] ha-671025 status: &{Name:ha-671025 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:40:31.661882  632473 status.go:174] checking status of ha-671025-m02 ...
	I0917 00:40:31.662193  632473 cli_runner.go:164] Run: docker container inspect ha-671025-m02 --format={{.State.Status}}
	I0917 00:40:31.683427  632473 status.go:371] ha-671025-m02 host status = "Stopped" (err=<nil>)
	I0917 00:40:31.683482  632473 status.go:384] host is not running, skipping remaining checks
	I0917 00:40:31.683498  632473 status.go:176] ha-671025-m02 status: &{Name:ha-671025-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:40:31.683534  632473 status.go:174] checking status of ha-671025-m04 ...
	I0917 00:40:31.683860  632473 cli_runner.go:164] Run: docker container inspect ha-671025-m04 --format={{.State.Status}}
	I0917 00:40:31.702835  632473 status.go:371] ha-671025-m04 host status = "Stopped" (err=<nil>)
	I0917 00:40:31.702879  632473 status.go:384] host is not running, skipping remaining checks
	I0917 00:40:31.702893  632473 status.go:176] ha-671025-m04 status: &{Name:ha-671025-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (29.74s)
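
Note that status on a stopped cluster exits non-zero by design: the exit status 7 above accompanies every component reporting Stopped, so the assertion treats it as the expected post-stop state rather than a command failure. A sketch of acting on that convention, assuming a minikube binary on PATH; reading 7 as "everything stopped" is taken from this run's output, not from a documented guarantee:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// clusterStopped runs `minikube status` and interprets exit code 7
// (the stopped-cluster signature in the log above) as a normal,
// expected state rather than an error.
func clusterStopped(profile string) (bool, error) {
	err := exec.Command("minikube", "-p", profile, "status").Run()
	if err == nil {
		return false, nil // everything running
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		return true, nil // stopped, not a failure
	}
	return false, err
}

func main() {
	stopped, err := clusterStopped("ha-671025")
	fmt.Println(stopped, err)
}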

TestJSONOutput/start/Command (38.94s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-329701 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-329701 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (38.937376045s)
--- PASS: TestJSONOutput/start/Command (38.94s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.79s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-329701 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.79s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-329701 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (16.15s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-329701 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-329701 --output=json --user=testUser: (16.150123549s)
--- PASS: TestJSONOutput/stop/Command (16.15s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-590154 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-590154 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (69.247564ms)

-- stdout --
	{"specversion":"1.0","id":"2f3d584c-ffe5-478e-a115-bcc51f9b4ae0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-590154] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eb7bae16-76f9-4b8b-858a-c959873957da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21550"}}
	{"specversion":"1.0","id":"73271103-edba-404b-ac85-08d19913bc09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c5b25185-596b-437e-81ec-3869336dee1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig"}}
	{"specversion":"1.0","id":"895aee7f-32db-455f-8396-9c6336d17ff3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube"}}
	{"specversion":"1.0","id":"5232c6c4-77f2-4ca8-9779-693989eb0161","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ba17f7fa-91fd-4335-a40f-8725858e1ad9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"528054d5-8d8e-45ef-813d-75c50507e917","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-590154" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-590154
--- PASS: TestErrorJSONOutput (0.22s)
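
Every line minikube prints under --output=json is a CloudEvents-style envelope, and the type field (io.k8s.sigs.minikube.step, .info, .error) tells a consumer how to treat the data payload, as the stdout above shows. A minimal decoder sketch using only the standard library; the Event struct mirrors just the fields visible in this log:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// Event captures the fields of minikube's JSON output that appear
// in the log above: the event type and a free-form string map.
type Event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Feed this `minikube start --output=json ...` on stdin.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev Event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if strings.HasSuffix(ev.Type, ".error") {
			fmt.Printf("error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}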

TestKicCustomNetwork/create_custom_network (31.42s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-475481 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-475481 --network=: (29.250439861s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-475481" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-475481
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-475481: (2.153280166s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.42s)

TestKicCustomNetwork/use_default_bridge_network (24.16s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-706173 --network=bridge
E0917 00:59:57.509832  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-706173 --network=bridge: (22.176846842s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-706173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-706173
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-706173: (1.959356219s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.16s)

TestKicExistingNetwork (24.76s)

=== RUN   TestKicExistingNetwork
I0917 01:00:10.128773  521273 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0917 01:00:10.147342  521273 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0917 01:00:10.147475  521273 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0917 01:00:10.147523  521273 cli_runner.go:164] Run: docker network inspect existing-network
W0917 01:00:10.164496  521273 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0917 01:00:10.164537  521273 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0917 01:00:10.164566  521273 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0917 01:00:10.164759  521273 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0917 01:00:10.182781  521273 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c0c35d0ccc41 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:82:29:30:69:13:a2} reservation:<nil>}
I0917 01:00:10.183258  521273 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000122e30}
I0917 01:00:10.183299  521273 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0917 01:00:10.183370  521273 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0917 01:00:10.239812  521273 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-834221 --network=existing-network
E0917 01:00:14.438994  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-834221 --network=existing-network: (22.652551361s)
helpers_test.go:175: Cleaning up "existing-network-834221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-834221
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-834221: (1.962482363s)
I0917 01:00:34.873453  521273 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.76s)
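
The network_create lines above show the subnet picker skipping 192.168.49.0/24 (held by an existing bridge) and settling on 192.168.58.0/24. A rough sketch of that scan; the step of 9 matches the spacing seen in this report (49, 58, and 67 elsewhere), but the real picker in minikube's network.go consults Docker for the taken set, which is simply passed in here:

package main

import "fmt"

// freeSubnet walks 192.168.x.0/24 candidates in steps of 9 and
// returns the first one not already taken, mirroring the
// skip-then-use behavior in the log above.
func freeSubnet(taken map[string]bool) (string, error) {
	for third := 49; third < 256; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr, nil
		}
	}
	return "", fmt.Errorf("no free /24 found")
}

func main() {
	taken := map[string]bool{"192.168.49.0/24": true}
	fmt.Println(freeSubnet(taken)) // 192.168.58.0/24 <nil>
}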

TestKicCustomSubnet (24.25s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-262121 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-262121 --subnet=192.168.60.0/24: (22.082805932s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-262121 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-262121" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-262121
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-262121: (2.147307356s)
--- PASS: TestKicCustomSubnet (24.25s)

TestKicStaticIP (24.81s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-875848 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-875848 --static-ip=192.168.200.200: (22.534630745s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-875848 ip
helpers_test.go:175: Cleaning up "static-ip-875848" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-875848
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-875848: (2.13170261s)
--- PASS: TestKicStaticIP (24.81s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (49.67s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-559206 --driver=docker  --container-runtime=crio
E0917 01:01:25.128509  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-559206 --driver=docker  --container-runtime=crio: (21.479811345s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-574452 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-574452 --driver=docker  --container-runtime=crio: (22.19984991s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-559206
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-574452
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-574452" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-574452
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-574452: (2.365668058s)
helpers_test.go:175: Cleaning up "first-559206" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-559206
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-559206: (2.404167263s)
--- PASS: TestMinikubeProfile (49.67s)

TestMountStart/serial/StartWithMountFirst (5.68s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-832435 --memory=3072 --mount-string /tmp/TestMountStartserial1033771879/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-832435 --memory=3072 --mount-string /tmp/TestMountStartserial1033771879/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.676605998s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.68s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-832435 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.23s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-854288 --memory=3072 --mount-string /tmp/TestMountStartserial1033771879/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-854288 --memory=3072 --mount-string /tmp/TestMountStartserial1033771879/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.22938459s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.23s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-854288 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-832435 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-832435 --alsologtostderr -v=5: (1.681310246s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-854288 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-854288
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-854288: (1.192961858s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (7.27s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-854288
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-854288: (6.272804546s)
--- PASS: TestMountStart/serial/RestartStopped (7.27s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-854288 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (122.14s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-534011 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0917 01:04:28.194605  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-534011 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m1.66276055s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (122.14s)

TestMultiNode/serial/DeployApp2Nodes (4.67s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534011 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534011 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-534011 -- rollout status deployment/busybox: (3.13976384s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534011 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534011 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534011 -- exec busybox-7b57f96db7-7jkz2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534011 -- exec busybox-7b57f96db7-852r4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534011 -- exec busybox-7b57f96db7-7jkz2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534011 -- exec busybox-7b57f96db7-852r4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534011 -- exec busybox-7b57f96db7-7jkz2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534011 -- exec busybox-7b57f96db7-852r4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.67s)

TestMultiNode/serial/PingHostFrom2Pods (0.8s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534011 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534011 -- exec busybox-7b57f96db7-7jkz2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534011 -- exec busybox-7b57f96db7-7jkz2 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534011 -- exec busybox-7b57f96db7-852r4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534011 -- exec busybox-7b57f96db7-852r4 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)

TestMultiNode/serial/AddNode (54.3s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-534011 -v=5 --alsologtostderr
E0917 01:05:14.436286  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-534011 -v=5 --alsologtostderr: (53.665579385s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.30s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-534011 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (9.67s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 cp testdata/cp-test.txt multinode-534011:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 ssh -n multinode-534011 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 cp multinode-534011:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1587161100/001/cp-test_multinode-534011.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 ssh -n multinode-534011 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 cp multinode-534011:/home/docker/cp-test.txt multinode-534011-m02:/home/docker/cp-test_multinode-534011_multinode-534011-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 ssh -n multinode-534011 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 ssh -n multinode-534011-m02 "sudo cat /home/docker/cp-test_multinode-534011_multinode-534011-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 cp multinode-534011:/home/docker/cp-test.txt multinode-534011-m03:/home/docker/cp-test_multinode-534011_multinode-534011-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 ssh -n multinode-534011 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 ssh -n multinode-534011-m03 "sudo cat /home/docker/cp-test_multinode-534011_multinode-534011-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 cp testdata/cp-test.txt multinode-534011-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 ssh -n multinode-534011-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 cp multinode-534011-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1587161100/001/cp-test_multinode-534011-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 ssh -n multinode-534011-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 cp multinode-534011-m02:/home/docker/cp-test.txt multinode-534011:/home/docker/cp-test_multinode-534011-m02_multinode-534011.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 ssh -n multinode-534011-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 ssh -n multinode-534011 "sudo cat /home/docker/cp-test_multinode-534011-m02_multinode-534011.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 cp multinode-534011-m02:/home/docker/cp-test.txt multinode-534011-m03:/home/docker/cp-test_multinode-534011-m02_multinode-534011-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 ssh -n multinode-534011-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 ssh -n multinode-534011-m03 "sudo cat /home/docker/cp-test_multinode-534011-m02_multinode-534011-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 cp testdata/cp-test.txt multinode-534011-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 ssh -n multinode-534011-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 cp multinode-534011-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1587161100/001/cp-test_multinode-534011-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 ssh -n multinode-534011-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 cp multinode-534011-m03:/home/docker/cp-test.txt multinode-534011:/home/docker/cp-test_multinode-534011-m03_multinode-534011.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 ssh -n multinode-534011-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 ssh -n multinode-534011 "sudo cat /home/docker/cp-test_multinode-534011-m03_multinode-534011.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 cp multinode-534011-m03:/home/docker/cp-test.txt multinode-534011-m02:/home/docker/cp-test_multinode-534011-m03_multinode-534011-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 ssh -n multinode-534011-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 ssh -n multinode-534011-m02 "sudo cat /home/docker/cp-test_multinode-534011-m03_multinode-534011-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.67s)
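
The CopyFile sweep above is pairwise: seed each node with the file, copy it node-to-node under a name that records source and destination, then cat it back over ssh on the receiver. A condensed sketch of that loop, assuming a minikube binary on PATH and this profile's node names; error handling and the local /tmp round-trip are trimmed:

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	return exec.Command("minikube", args...).Run()
}

// copyFileSweep mirrors the pairwise cp/ssh verification above.
func copyFileSweep(profile string, nodes []string) error {
	for _, src := range nodes {
		// Seed the source node.
		if err := run("-p", profile, "cp", "testdata/cp-test.txt",
			src+":/home/docker/cp-test.txt"); err != nil {
			return err
		}
		for _, dst := range nodes {
			if src == dst {
				continue
			}
			// Copy node-to-node, then read it back on the receiver.
			dest := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
			if err := run("-p", profile, "cp",
				src+":/home/docker/cp-test.txt", dest); err != nil {
				return err
			}
			if err := run("-p", profile, "ssh", "-n", dst,
				"sudo cat /home/docker/cp-test_"+src+"_"+dst+".txt"); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	nodes := []string{"multinode-534011", "multinode-534011-m02", "multinode-534011-m03"}
	fmt.Println(copyFileSweep("multinode-534011", nodes))
}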

TestMultiNode/serial/StopNode (2.29s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-534011 node stop m03: (1.307544192s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-534011 status: exit status 7 (492.57645ms)

                                                
                                                
-- stdout --
	multinode-534011
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-534011-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-534011-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-534011 status --alsologtostderr: exit status 7 (492.114122ms)

                                                
                                                
-- stdout --
	multinode-534011
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-534011-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-534011-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 01:05:52.782933  697166 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:05:52.783198  697166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:05:52.783206  697166 out.go:374] Setting ErrFile to fd 2...
	I0917 01:05:52.783211  697166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:05:52.783383  697166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 01:05:52.783583  697166 out.go:368] Setting JSON to false
	I0917 01:05:52.783606  697166 mustload.go:65] Loading cluster: multinode-534011
	I0917 01:05:52.783676  697166 notify.go:220] Checking for updates...
	I0917 01:05:52.784048  697166 config.go:182] Loaded profile config "multinode-534011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:05:52.784079  697166 status.go:174] checking status of multinode-534011 ...
	I0917 01:05:52.784629  697166 cli_runner.go:164] Run: docker container inspect multinode-534011 --format={{.State.Status}}
	I0917 01:05:52.804376  697166 status.go:371] multinode-534011 host status = "Running" (err=<nil>)
	I0917 01:05:52.804421  697166 host.go:66] Checking if "multinode-534011" exists ...
	I0917 01:05:52.804768  697166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-534011
	I0917 01:05:52.824224  697166 host.go:66] Checking if "multinode-534011" exists ...
	I0917 01:05:52.824567  697166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 01:05:52.824617  697166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-534011
	I0917 01:05:52.843919  697166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/multinode-534011/id_rsa Username:docker}
	I0917 01:05:52.939185  697166 ssh_runner.go:195] Run: systemctl --version
	I0917 01:05:52.944025  697166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 01:05:52.956338  697166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:05:53.011781  697166 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-17 01:05:53.001079873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 01:05:53.012352  697166 kubeconfig.go:125] found "multinode-534011" server: "https://192.168.67.2:8443"
	I0917 01:05:53.012399  697166 api_server.go:166] Checking apiserver status ...
	I0917 01:05:53.012449  697166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 01:05:53.024621  697166 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1445/cgroup
	W0917 01:05:53.035245  697166 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1445/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 01:05:53.035310  697166 ssh_runner.go:195] Run: ls
	I0917 01:05:53.039405  697166 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0917 01:05:53.044525  697166 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0917 01:05:53.044550  697166 status.go:463] multinode-534011 apiserver status = Running (err=<nil>)
	I0917 01:05:53.044560  697166 status.go:176] multinode-534011 status: &{Name:multinode-534011 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 01:05:53.044578  697166 status.go:174] checking status of multinode-534011-m02 ...
	I0917 01:05:53.044820  697166 cli_runner.go:164] Run: docker container inspect multinode-534011-m02 --format={{.State.Status}}
	I0917 01:05:53.063178  697166 status.go:371] multinode-534011-m02 host status = "Running" (err=<nil>)
	I0917 01:05:53.063218  697166 host.go:66] Checking if "multinode-534011-m02" exists ...
	I0917 01:05:53.063528  697166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-534011-m02
	I0917 01:05:53.081062  697166 host.go:66] Checking if "multinode-534011-m02" exists ...
	I0917 01:05:53.081356  697166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 01:05:53.081440  697166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-534011-m02
	I0917 01:05:53.099625  697166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21550-517646/.minikube/machines/multinode-534011-m02/id_rsa Username:docker}
	I0917 01:05:53.193842  697166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 01:05:53.205675  697166 status.go:176] multinode-534011-m02 status: &{Name:multinode-534011-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0917 01:05:53.205715  697166 status.go:174] checking status of multinode-534011-m03 ...
	I0917 01:05:53.205974  697166 cli_runner.go:164] Run: docker container inspect multinode-534011-m03 --format={{.State.Status}}
	I0917 01:05:53.225188  697166 status.go:371] multinode-534011-m03 host status = "Stopped" (err=<nil>)
	I0917 01:05:53.225209  697166 status.go:384] host is not running, skipping remaining checks
	I0917 01:05:53.225216  697166 status.go:176] multinode-534011-m03 status: &{Name:multinode-534011-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
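
Note the exit status 7 on both status invocations above: `minikube status` deliberately exits non-zero when any node is stopped, so callers must inspect the exit code rather than treat the command as failed. A sketch of that check (profile name taken from this run):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "multinode-534011", "status").CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Exit code 7 is expected here while a node is stopped.
			fmt.Printf("status exited %d\n", ee.ExitCode())
		}
		fmt.Print(string(out))
	}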

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-534011 node start m03 -v=5 --alsologtostderr: (6.51789196s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.22s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (75.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-534011
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-534011
E0917 01:06:25.127894  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-534011: (29.499693461s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-534011 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-534011 --wait=true -v=5 --alsologtostderr: (45.516198148s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-534011
--- PASS: TestMultiNode/serial/RestartKeepsNodes (75.12s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-534011 node delete m03: (4.708309604s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.31s)
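
The last check above renders node readiness with a go-template. The same template can be exercised offline with Go's text/template against a decoded `kubectl get nodes -o json` document; the JSON below is a hand-abbreviated stand-in, not output captured from this run:

	package main

	import (
		"encoding/json"
		"os"
		"text/template"
	)

	const nodesJSON = `{"items":[{"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

	func main() {
		var doc map[string]any
		if err := json.Unmarshal([]byte(nodesJSON), &doc); err != nil {
			panic(err)
		}
		// Same template the test passes to kubectl: print each node's Ready status.
		tmpl := template.Must(template.New("ready").Parse(
			`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
		if err := tmpl.Execute(os.Stdout, doc); err != nil {
			panic(err)
		}
	}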

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-534011 stop: (28.659582627s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-534011 status: exit status 7 (96.895518ms)

                                                
                                                
-- stdout --
	multinode-534011
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-534011-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-534011 status --alsologtostderr: exit status 7 (90.772038ms)

                                                
                                                
-- stdout --
	multinode-534011
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-534011-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 01:07:49.675725  707394 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:07:49.675857  707394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:07:49.675864  707394 out.go:374] Setting ErrFile to fd 2...
	I0917 01:07:49.675870  707394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:07:49.676087  707394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 01:07:49.676275  707394 out.go:368] Setting JSON to false
	I0917 01:07:49.676299  707394 mustload.go:65] Loading cluster: multinode-534011
	I0917 01:07:49.676454  707394 notify.go:220] Checking for updates...
	I0917 01:07:49.676703  707394 config.go:182] Loaded profile config "multinode-534011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:07:49.676733  707394 status.go:174] checking status of multinode-534011 ...
	I0917 01:07:49.677203  707394 cli_runner.go:164] Run: docker container inspect multinode-534011 --format={{.State.Status}}
	I0917 01:07:49.696965  707394 status.go:371] multinode-534011 host status = "Stopped" (err=<nil>)
	I0917 01:07:49.697001  707394 status.go:384] host is not running, skipping remaining checks
	I0917 01:07:49.697011  707394 status.go:176] multinode-534011 status: &{Name:multinode-534011 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 01:07:49.697059  707394 status.go:174] checking status of multinode-534011-m02 ...
	I0917 01:07:49.697449  707394 cli_runner.go:164] Run: docker container inspect multinode-534011-m02 --format={{.State.Status}}
	I0917 01:07:49.716078  707394 status.go:371] multinode-534011-m02 host status = "Stopped" (err=<nil>)
	I0917 01:07:49.716149  707394 status.go:384] host is not running, skipping remaining checks
	I0917 01:07:49.716164  707394 status.go:176] multinode-534011-m02 status: &{Name:multinode-534011-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.85s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (50.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-534011 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-534011 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (50.343592904s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534011 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.94s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (24.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-534011
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-534011-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-534011-m02 --driver=docker  --container-runtime=crio: exit status 14 (67.562861ms)

                                                
                                                
-- stdout --
	* [multinode-534011-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-534011-m02' is duplicated with machine name 'multinode-534011-m02' in profile 'multinode-534011'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-534011-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-534011-m03 --driver=docker  --container-runtime=crio: (21.752369583s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-534011
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-534011: exit status 80 (294.277934ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-534011 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-534011-m03 already exists in multinode-534011-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-534011-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-534011-m03: (2.376490663s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.54s)

                                                
                                    
TestPreload (113.86s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-541101 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-541101 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (51.094736986s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-541101 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-541101 image pull gcr.io/k8s-minikube/busybox: (2.595545711s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-541101
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-541101: (5.848670226s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-541101 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0917 01:10:14.436476  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-541101 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (51.63834148s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-541101 image list
helpers_test.go:175: Cleaning up "test-preload-541101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-541101
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-541101: (2.444541487s)
--- PASS: TestPreload (113.86s)
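
The preload exercise above boils down to: start with --preload=false, pull an extra image, stop, restart, and confirm the pulled image is still listed. A compact Go sketch of the same sequence, shelling out as the test does (profile name illustrative, flags simplified from the run above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// run shells out to minikube and panics on failure, like a test helper would.
	func run(args ...string) string {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprint(err, ": ", string(out)))
		}
		return string(out)
	}

	func main() {
		p := "test-preload-541101"
		run("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
		run("stop", "-p", p)
		run("start", "-p", p, "--wait=true")
		images := run("-p", p, "image", "list")
		fmt.Println("busybox still present:", strings.Contains(images, "busybox"))
	}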

                                                
                                    
TestScheduledStopUnix (96.59s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-951424 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-951424 --memory=3072 --driver=docker  --container-runtime=crio: (20.41731343s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-951424 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-951424 -n scheduled-stop-951424
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-951424 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0917 01:11:24.073938  521273 retry.go:31] will retry after 76.118µs: open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/scheduled-stop-951424/pid: no such file or directory
I0917 01:11:24.075114  521273 retry.go:31] will retry after 207.303µs: open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/scheduled-stop-951424/pid: no such file or directory
I0917 01:11:24.076272  521273 retry.go:31] will retry after 324.429µs: open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/scheduled-stop-951424/pid: no such file or directory
I0917 01:11:24.077453  521273 retry.go:31] will retry after 315.879µs: open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/scheduled-stop-951424/pid: no such file or directory
I0917 01:11:24.078594  521273 retry.go:31] will retry after 745.175µs: open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/scheduled-stop-951424/pid: no such file or directory
I0917 01:11:24.079726  521273 retry.go:31] will retry after 502.438µs: open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/scheduled-stop-951424/pid: no such file or directory
I0917 01:11:24.080865  521273 retry.go:31] will retry after 1.494541ms: open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/scheduled-stop-951424/pid: no such file or directory
I0917 01:11:24.083128  521273 retry.go:31] will retry after 1.727904ms: open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/scheduled-stop-951424/pid: no such file or directory
I0917 01:11:24.085379  521273 retry.go:31] will retry after 1.870107ms: open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/scheduled-stop-951424/pid: no such file or directory
I0917 01:11:24.087616  521273 retry.go:31] will retry after 3.963031ms: open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/scheduled-stop-951424/pid: no such file or directory
I0917 01:11:24.091875  521273 retry.go:31] will retry after 8.024304ms: open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/scheduled-stop-951424/pid: no such file or directory
I0917 01:11:24.100045  521273 retry.go:31] will retry after 4.845263ms: open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/scheduled-stop-951424/pid: no such file or directory
I0917 01:11:24.105280  521273 retry.go:31] will retry after 7.811419ms: open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/scheduled-stop-951424/pid: no such file or directory
I0917 01:11:24.113523  521273 retry.go:31] will retry after 13.127143ms: open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/scheduled-stop-951424/pid: no such file or directory
I0917 01:11:24.127813  521273 retry.go:31] will retry after 27.869175ms: open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/scheduled-stop-951424/pid: no such file or directory
I0917 01:11:24.156275  521273 retry.go:31] will retry after 25.932351ms: open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/scheduled-stop-951424/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-951424 --cancel-scheduled
E0917 01:11:25.128880  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-951424 -n scheduled-stop-951424
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-951424
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-951424 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-951424
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-951424: exit status 7 (71.767764ms)

                                                
                                                
-- stdout --
	scheduled-stop-951424
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-951424 -n scheduled-stop-951424
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-951424 -n scheduled-stop-951424: exit status 7 (70.139006ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-951424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-951424
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-951424: (4.761612331s)
--- PASS: TestScheduledStopUnix (96.59s)
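
The retry.go lines above show minikube's retry helper polling for the scheduled-stop pid file with a growing, jittered backoff. A minimal sketch of that pattern (attempt count and initial wait are illustrative, not minikube's actual tuning):

	package main

	import (
		"fmt"
		"math/rand"
		"os"
		"time"
	)

	func retry(attempts int, initial time.Duration, f func() error) error {
		wait := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = f(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			// Grow the wait and add jitter, as the intervals in the log suggest.
			wait = wait*2 + time.Duration(rand.Int63n(int64(wait)))
		}
		return err
	}

	func main() {
		err := retry(5, 100*time.Microsecond, func() error {
			_, e := os.Stat("/nonexistent/pid") // stand-in for the profile's pid file
			return e
		})
		fmt.Println("final:", err)
	}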

                                                
                                    
TestInsufficientStorage (9.46s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-486340 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-486340 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.0003694s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"50582052-99ec-46f1-b944-89ad3cf407a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-486340] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e213b55f-a9b8-4203-96da-8a7d1e2849d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21550"}}
	{"specversion":"1.0","id":"f157d1f8-244c-465a-9777-a0d500149e8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8d8e4205-9b01-4d0d-9e60-239b3797a0de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig"}}
	{"specversion":"1.0","id":"1b6afdc3-d03c-4ebf-ad51-10688f57dfa1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube"}}
	{"specversion":"1.0","id":"746e8b08-6ebe-4db6-ac2c-4a68aec86934","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9322ae19-c3c1-4325-9941-37b6340a8079","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3ccdf29d-e5b6-468f-bf77-6e1b3953908d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"741fad0b-6f06-4da5-b8c0-041834a89bd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5bdfccca-cdb8-4b88-b00d-97358e63f65d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1cb639c2-46c6-4b4f-8334-25680458ae06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b160614a-9273-43fd-b41f-876f05b21895","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-486340\" primary control-plane node in \"insufficient-storage-486340\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6b23fb3d-695d-4e6a-8d8a-2a4ac0039910","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"38f74fd3-9a0d-4ee6-8a5b-687d4a76e0d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e720756d-a611-4941-ab75-512e14280139","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-486340 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-486340 --output=json --layout=cluster: exit status 7 (282.330066ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-486340","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-486340","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 01:12:47.086616  729417 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-486340" does not appear in /home/jenkins/minikube-integration/21550-517646/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-486340 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-486340 --output=json --layout=cluster: exit status 7 (278.714337ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-486340","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-486340","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 01:12:47.366361  729523 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-486340" does not appear in /home/jenkins/minikube-integration/21550-517646/kubeconfig
	E0917 01:12:47.377565  729523 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/insufficient-storage-486340/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-486340" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-486340
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-486340: (1.899087422s)
--- PASS: TestInsufficientStorage (9.46s)
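
With --output=json, each stdout line above is a CloudEvents-style JSON object, so a consumer can decode line by line and branch on the event type. A sketch against a hand-abbreviated copy of the RSRC_DOCKER_STORAGE event from this run:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type minikubeEvent struct {
		Specversion string            `json:"specversion"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`
		var ev minikubeEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Println("error event:", ev.Data["name"], "exit code", ev.Data["exitcode"])
		}
	}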

                                                
                                    
TestRunningBinaryUpgrade (74.27s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3230339354 start -p running-upgrade-916890 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3230339354 start -p running-upgrade-916890 --memory=3072 --vm-driver=docker  --container-runtime=crio: (46.699403578s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-916890 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-916890 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.421853867s)
helpers_test.go:175: Cleaning up "running-upgrade-916890" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-916890
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-916890: (3.542495151s)
--- PASS: TestRunningBinaryUpgrade (74.27s)

                                                
                                    
TestMissingContainerUpgrade (77.85s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3215115960 start -p missing-upgrade-787407 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3215115960 start -p missing-upgrade-787407 --memory=3072 --driver=docker  --container-runtime=crio: (22.915272211s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-787407
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-787407: (12.688697572s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-787407
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-787407 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-787407 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.346148319s)
helpers_test.go:175: Cleaning up "missing-upgrade-787407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-787407
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-787407: (4.322741055s)
--- PASS: TestMissingContainerUpgrade (77.85s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-241323 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-241323 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (79.168725ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-241323] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.64s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.64s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (44.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-241323 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-241323 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (44.070902707s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-241323 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.47s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (62.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3844324434 start -p stopped-upgrade-273702 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3844324434 start -p stopped-upgrade-273702 --memory=3072 --vm-driver=docker  --container-runtime=crio: (45.137657852s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3844324434 -p stopped-upgrade-273702 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3844324434 -p stopped-upgrade-273702 stop: (2.51492433s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-273702 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-273702 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (15.18142634s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (62.84s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (25.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-241323 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-241323 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.277306201s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-241323 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-241323 status -o json: exit status 2 (328.512096ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-241323","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-241323
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-241323: (2.800332506s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.41s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-273702
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-273702: (1.08176783s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

                                                
                                    
TestNoKubernetes/serial/Start (11.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-241323 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-241323 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (11.463556613s)
--- PASS: TestNoKubernetes/serial/Start (11.46s)

                                                
                                    
TestPause/serial/Start (43.99s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-865174 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-865174 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (43.98956698s)
--- PASS: TestPause/serial/Start (43.99s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-241323 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-241323 "sudo systemctl is-active --quiet service kubelet": exit status 1 (301.303233ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
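
The assertion here leans on systemd conventions: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active (the remote shell reported status 3, i.e. inactive, above), and `minikube ssh` surfaces a non-zero remote exit as its own non-zero exit. A sketch of the same probe:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("minikube", "ssh", "-p", "NoKubernetes-241323",
			"sudo systemctl is-active --quiet service kubelet").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("kubelet not active; ssh exit code:", ee.ExitCode())
		} else if err == nil {
			fmt.Println("kubelet is active")
		}
	}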

                                                
                                    
TestNoKubernetes/serial/ProfileList (4.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.845037152s)
--- PASS: TestNoKubernetes/serial/ProfileList (4.83s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-241323
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-241323: (1.215591116s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-241323 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-241323 --driver=docker  --container-runtime=crio: (7.230482652s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.23s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-241323 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-241323 "sudo systemctl is-active --quiet service kubelet": exit status 1 (288.613572ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (10.59s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-865174 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-865174 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (10.571124317s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (10.59s)

                                                
                                    
TestNetworkPlugins/group/false (3.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-333616 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-333616 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (165.510289ms)

                                                
                                                
-- stdout --
	* [false-333616] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 01:14:57.366881  769041 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:14:57.367016  769041 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:14:57.367029  769041 out.go:374] Setting ErrFile to fd 2...
	I0917 01:14:57.367035  769041 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:14:57.367268  769041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-517646/.minikube/bin
	I0917 01:14:57.367782  769041 out.go:368] Setting JSON to false
	I0917 01:14:57.369030  769041 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":14240,"bootTime":1758057457,"procs":461,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 01:14:57.369140  769041 start.go:140] virtualization: kvm guest
	I0917 01:14:57.371374  769041 out.go:179] * [false-333616] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 01:14:57.372941  769041 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 01:14:57.372983  769041 notify.go:220] Checking for updates...
	I0917 01:14:57.375936  769041 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:14:57.377085  769041 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-517646/kubeconfig
	I0917 01:14:57.378319  769041 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-517646/.minikube
	I0917 01:14:57.379472  769041 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 01:14:57.380653  769041 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:14:57.382367  769041 config.go:182] Loaded profile config "kubernetes-upgrade-790254": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:14:57.382519  769041 config.go:182] Loaded profile config "missing-upgrade-787407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I0917 01:14:57.382648  769041 config.go:182] Loaded profile config "pause-865174": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:14:57.382779  769041 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 01:14:57.410474  769041 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0917 01:14:57.410567  769041 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:14:57.474692  769041 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-09-17 01:14:57.464108849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.27.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0917 01:14:57.474810  769041 docker.go:318] overlay module found
	I0917 01:14:57.476650  769041 out.go:179] * Using the docker driver based on user configuration
	I0917 01:14:57.477806  769041 start.go:304] selected driver: docker
	I0917 01:14:57.477819  769041 start.go:918] validating driver "docker" against <nil>
	I0917 01:14:57.477829  769041 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:14:57.479663  769041 out.go:203] 
	W0917 01:14:57.480787  769041 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0917 01:14:57.481973  769041 out.go:203] 

** /stderr **
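Note: exit status 14 (MK_USAGE) is this test's expected outcome, not a crash: cri-o ships no built-in fallback network, so minikube's flag validation rejects --cni=false before any cluster is created. For contrast, a minimal sketch of an invocation the same validator would accept; the bridge CNI choice is an assumption for illustration, not something this test runs:

	# hypothetical variant: any concrete CNI value satisfies the crio check
	out/minikube-linux-amd64 start -p false-333616 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio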
net_test.go:88: 
----------------------- debugLogs start: false-333616 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-333616

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-333616

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-333616

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-333616

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-333616

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-333616

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-333616

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-333616

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-333616

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-333616

>>> host: /etc/nsswitch.conf:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: /etc/hosts:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: /etc/resolv.conf:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-333616

>>> host: crictl pods:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: crictl containers:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> k8s: describe netcat deployment:
error: context "false-333616" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-333616" does not exist

>>> k8s: netcat logs:
error: context "false-333616" does not exist

>>> k8s: describe coredns deployment:
error: context "false-333616" does not exist

>>> k8s: describe coredns pods:
error: context "false-333616" does not exist

>>> k8s: coredns logs:
error: context "false-333616" does not exist

>>> k8s: describe api server pod(s):
error: context "false-333616" does not exist

>>> k8s: api server logs:
error: context "false-333616" does not exist

>>> host: /etc/cni:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: ip a s:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: ip r s:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: iptables-save:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: iptables table nat:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> k8s: describe kube-proxy daemon set:
error: context "false-333616" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-333616" does not exist

>>> k8s: kube-proxy logs:
error: context "false-333616" does not exist

>>> host: kubelet daemon status:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: kubelet daemon config:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> k8s: kubelet logs:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:14:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-790254
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:14:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: missing-upgrade-787407
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:14:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-865174
contexts:
- context:
    cluster: kubernetes-upgrade-790254
    user: kubernetes-upgrade-790254
  name: kubernetes-upgrade-790254
- context:
    cluster: missing-upgrade-787407
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:14:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-787407
  name: missing-upgrade-787407
- context:
    cluster: pause-865174
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:14:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-865174
  name: pause-865174
current-context: pause-865174
kind: Config
users:
- name: kubernetes-upgrade-790254
  user:
    client-certificate: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kubernetes-upgrade-790254/client.crt
    client-key: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kubernetes-upgrade-790254/client.key
- name: missing-upgrade-787407
  user:
    client-certificate: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/missing-upgrade-787407/client.crt
    client-key: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/missing-upgrade-787407/client.key
- name: pause-865174
  user:
    client-certificate: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/pause-865174/client.crt
    client-key: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/pause-865174/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-333616

>>> host: docker daemon status:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: docker daemon config:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: /etc/docker/daemon.json:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: docker system info:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: cri-docker daemon status:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: cri-docker daemon config:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: cri-dockerd version:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: containerd daemon status:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: containerd daemon config:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: /etc/containerd/config.toml:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: containerd config dump:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: crio daemon status:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: crio daemon config:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: /etc/crio:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

>>> host: crio config:
* Profile "false-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-333616"

----------------------- debugLogs end: false-333616 [took: 3.305943645s] --------------------------------
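The kubectl config dump above lists only the three surviving contexts (kubernetes-upgrade-790254, missing-upgrade-787407, pause-865174), which is exactly why every false-333616 probe reports a missing context. A sketch of addressing an existing context explicitly, without switching current-context; the get pods target is illustrative:

	kubectl --context kubernetes-upgrade-790254 get pods -A
	kubectl config use-context pause-865174   # alternatively, make it the default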
helpers_test.go:175: Cleaning up "false-333616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-333616
--- PASS: TestNetworkPlugins/group/false (3.64s)

TestPause/serial/Pause (0.8s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-865174 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.80s)

TestPause/serial/VerifyStatus (0.35s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-865174 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-865174 --output=json --layout=cluster: exit status 2 (345.034685ms)

-- stdout --
	{"Name":"pause-865174","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-865174","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
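The --output=json --layout=cluster payload above is machine-readable, which is what makes this status check scriptable. A hedged sketch of extracting the paused state from it, assuming jq is available on the agent:

	# prints "Paused" (StatusCode 418) while the cluster is paused
	out/minikube-linux-amd64 status -p pause-865174 --output=json --layout=cluster | jq -r '.StatusName'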
--- PASS: TestPause/serial/VerifyStatus (0.35s)

TestPause/serial/Unpause (0.71s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-865174 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

TestPause/serial/PauseAgain (0.8s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-865174 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

TestPause/serial/DeletePaused (2.81s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-865174 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-865174 --alsologtostderr -v=5: (2.812797498s)
--- PASS: TestPause/serial/DeletePaused (2.81s)

TestPause/serial/VerifyDeletedResources (17.75s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (17.686423609s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-865174
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-865174: exit status 1 (18.998898ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-865174: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (17.75s)

TestStartStop/group/old-k8s-version/serial/FirstStart (49.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-963739 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-963739 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.75818455s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (49.76s)

TestStartStop/group/no-preload/serial/FirstStart (82.97s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-694277 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0917 01:16:25.128498  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-694277 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m22.974132187s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (82.97s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-963739 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d560582b-f500-40c3-b850-05b26c5a0074] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d560582b-f500-40c3-b850-05b26c5a0074] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003863185s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-963739 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.27s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-963739 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-963739 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/old-k8s-version/serial/Stop (16.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-963739 --alsologtostderr -v=3
E0917 01:16:37.511883  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-963739 --alsologtostderr -v=3: (16.019964594s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.02s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-963739 -n old-k8s-version-963739
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-963739 -n old-k8s-version-963739: exit status 7 (75.818691ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-963739 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (44.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-963739 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-963739 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (43.762234357s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-963739 -n old-k8s-version-963739
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.10s)

TestStartStop/group/no-preload/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-694277 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a38bdaa8-141a-45d8-bd7a-be825027a8e0] Pending
helpers_test.go:352: "busybox" [a38bdaa8-141a-45d8-bd7a-be825027a8e0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a38bdaa8-141a-45d8-bd7a-be825027a8e0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003873813s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-694277 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.27s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-694277 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-694277 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/no-preload/serial/Stop (16.47s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-694277 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-694277 --alsologtostderr -v=3: (16.474531474s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.47s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-g7hjk" [ea3976dd-e633-44ee-834b-7c623974b310] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003995575s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-694277 -n no-preload-694277
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-694277 -n no-preload-694277: exit status 7 (71.881598ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-694277 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (43.83s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-694277 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-694277 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (43.511000334s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-694277 -n no-preload-694277
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (43.83s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-g7hjk" [ea3976dd-e633-44ee-834b-7c623974b310] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003617357s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-963739 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-963739 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (3.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-963739 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-963739 -n old-k8s-version-963739
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-963739 -n old-k8s-version-963739: exit status 2 (334.059986ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-963739 -n old-k8s-version-963739
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-963739 -n old-k8s-version-963739: exit status 2 (338.04232ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-963739 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-963739 -n old-k8s-version-963739
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-963739 -n old-k8s-version-963739
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.15s)

TestStartStop/group/embed-certs/serial/FirstStart (109.78s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-748988 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-748988 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m49.779583221s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (109.78s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5rr64" [42a6ebb2-8985-4835-9732-ed20cfd58f7b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00387816s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5rr64" [42a6ebb2-8985-4835-9732-ed20cfd58f7b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003993346s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-694277 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-694277 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (2.99s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-694277 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-694277 -n no-preload-694277
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-694277 -n no-preload-694277: exit status 2 (325.730537ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-694277 -n no-preload-694277
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-694277 -n no-preload-694277: exit status 2 (354.027504ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-694277 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-694277 -n no-preload-694277
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-694277 -n no-preload-694277
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.99s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-377743 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-377743 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m8.436542517s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.44s)

TestStartStop/group/newest-cni/serial/FirstStart (30.04s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-454552 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-454552 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (30.042414106s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (30.04s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.79s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-454552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
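The warning above is why this group skips DeployApp and UserAppExistsAfterStop: with --network-plugin=cni and no CNI manifest applied, nodes stay NotReady and pods cannot schedule. A hedged sketch of the missing setup step; flannel is an assumption for illustration, and whichever manifest is used must agree with the pod CIDR 10.42.0.0/16 passed via --extra-config:

	# hypothetical: apply some CNI so pods can schedule (not part of this test run)
	kubectl --context newest-cni-454552 apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml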
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.79s)

TestStartStop/group/newest-cni/serial/Stop (2.41s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-454552 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-454552 --alsologtostderr -v=3: (2.410219858s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.41s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-454552 -n newest-cni-454552
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-454552 -n newest-cni-454552: exit status 7 (69.65423ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-454552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
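
Note: exit status 7 from `minikube status` maps to a stopped host, which is why the harness logs it as "may be ok" before enabling the dashboard addon on the stopped profile. A minimal Go sketch of the same check (an illustration, not the harness code; it assumes only that minikube is on PATH and reuses this run's profile name as a placeholder):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "newest-cni-454552" // placeholder profile name from this run
	cmd := exec.Command("minikube", "status", "--format={{.Host}}", "-p", profile)
	out, err := cmd.CombinedOutput()
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
		// Host reports "Stopped"; addons can still be enabled on a stopped profile.
		fmt.Printf("host: %s (exit 7, stopped)\n", strings.TrimSpace(string(out)))
		exec.Command("minikube", "addons", "enable", "dashboard", "-p", profile).Run()
	}
}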

TestStartStop/group/newest-cni/serial/SecondStart (15.06s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-454552 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-454552 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (14.716303006s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-454552 -n newest-cni-454552
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.06s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-454552 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
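
Note: VerifyKubernetesImages shells out to `minikube image list --format=json` and scans the result for non-minikube images. A schema-agnostic Go sketch of reading that output (it decodes into `any` because the exact JSON shape is not assumed here; the profile name is reused from this run):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "newest-cni-454552",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var v any // generic decode: no particular schema assumed
	if err := json.Unmarshal(out, &v); err != nil {
		panic(err)
	}
	fmt.Printf("%v\n", v)
}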

TestStartStop/group/newest-cni/serial/Pause (2.6s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-454552 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-454552 -n newest-cni-454552
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-454552 -n newest-cni-454552: exit status 2 (310.276726ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-454552 -n newest-cni-454552
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-454552 -n newest-cni-454552: exit status 2 (306.18096ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-454552 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-454552 -n newest-cni-454552
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-454552 -n newest-cni-454552
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.60s)
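
Note: while a profile is paused, `minikube status` exits 2 (APIServer "Paused", Kubelet "Stopped"), which the harness accepts as "may be ok". A minimal sketch of the pause/status/unpause sequence seen above (it assumes minikube is on PATH; the profile name is a placeholder from this run):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes minikube with the given args, returning combined output and exit code.
func run(args ...string) (string, int) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode()
	}
	return string(out), code
}

func main() {
	profile := "newest-cni-454552" // placeholder from this run
	run("pause", "-p", profile)
	out, code := run("status", "--format={{.APIServer}}", "-p", profile)
	fmt.Printf("apiserver=%s exit=%d\n", out, code) // expect "Paused" and exit 2
	run("unpause", "-p", profile)
}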

TestNetworkPlugins/group/auto/Start (40.07s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (40.067192731s)
--- PASS: TestNetworkPlugins/group/auto/Start (40.07s)

TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-748988 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9c0f6bbc-9e17-429d-b89f-30b1c69d6942] Pending
helpers_test.go:352: "busybox" [9c0f6bbc-9e17-429d-b89f-30b1c69d6942] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9c0f6bbc-9e17-429d-b89f-30b1c69d6942] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004357677s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-748988 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.25s)
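
Note: DeployApp creates a busybox pod from testdata/busybox.yaml, waits for it to reach Running, then reads the pod's open-file limit with `ulimit -n`. A rough outline of that flow (the polling loop only approximates the harness's 8m0s wait; the kubectl context is reused from this run):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// kubectl runs a kubectl command against the given context.
func kubectl(ctx string, args ...string) ([]byte, error) {
	return exec.Command("kubectl", append([]string{"--context", ctx}, args...)...).CombinedOutput()
}

func main() {
	ctx := "embed-certs-748988" // context name from this run
	kubectl(ctx, "create", "-f", "testdata/busybox.yaml")
	for i := 0; i < 96; i++ { // ~8m at 5s intervals, mirroring the harness timeout
		out, _ := kubectl(ctx, "get", "pod", "busybox", "-o", "jsonpath={.status.phase}")
		if string(out) == "Running" {
			break
		}
		time.Sleep(5 * time.Second)
	}
	out, _ := kubectl(ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	fmt.Printf("open-file limit in pod: %s", out)
}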

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-377743 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d8013114-9bfc-408e-859e-89015c37ee35] Pending
helpers_test.go:352: "busybox" [d8013114-9bfc-408e-859e-89015c37ee35] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d8013114-9bfc-408e-859e-89015c37ee35] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003775712s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-377743 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-748988 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-748988 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/embed-certs/serial/Stop (16.48s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-748988 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-748988 --alsologtostderr -v=3: (16.478420718s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.48s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-377743 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-377743 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (16.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-377743 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-377743 --alsologtostderr -v=3: (16.336961889s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.34s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-748988 -n embed-certs-748988
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-748988 -n embed-certs-748988: exit status 7 (76.65945ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-748988 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (52.88s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-748988 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0917 01:20:14.436471  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/addons-069011/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-748988 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (52.536098851s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-748988 -n embed-certs-748988
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.88s)

TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-333616 "pgrep -a kubelet"
I0917 01:20:18.425206  521273 config.go:182] Loaded profile config "auto-333616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

TestNetworkPlugins/group/auto/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-333616 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pg9lt" [0f102544-4819-429b-a79b-c3bac40d5532] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pg9lt" [0f102544-4819-429b-a79b-c3bac40d5532] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004121461s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.25s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-377743 -n default-k8s-diff-port-377743: exit status 7 (95.173737ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-377743 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-333616 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-333616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-333616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
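
Note: the DNS/Localhost/HairPin trio above probes the netcat deployment three ways: an nslookup of kubernetes.default, a connect-only scan (`nc -z`, with `-w 5` as the timeout) against localhost:8080, and the same dial against the pod's own `netcat` service to confirm hairpin traffic works. A hedged sketch of the trio (the context name is reused from this run):

package main

import (
	"fmt"
	"os/exec"
)

// probe runs a shell command inside the netcat deployment's pod.
func probe(ctx, shellCmd string) error {
	return exec.Command("kubectl", "--context", ctx, "exec",
		"deployment/netcat", "--", "/bin/sh", "-c", shellCmd).Run()
}

func main() {
	ctx := "auto-333616" // context name from this run
	checks := []struct{ name, cmd string }{
		{"dns", "nslookup kubernetes.default"},
		{"localhost", "nc -w 5 -i 5 -z localhost 8080"}, // -z: connect only, no data
		{"hairpin", "nc -w 5 -i 5 -z netcat 8080"},      // pod dials its own service
	}
	for _, c := range checks {
		fmt.Printf("%s: err=%v\n", c.name, probe(ctx, c.cmd))
	}
}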

TestNetworkPlugins/group/kindnet/Start (41.5s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (41.497008436s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.50s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kvlgq" [86e96c06-08a6-4a3a-8792-a1058831dfd3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004361864s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kvlgq" [86e96c06-08a6-4a3a-8792-a1058831dfd3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004164957s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-748988 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-748988 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (2.96s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-748988 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-748988 -n embed-certs-748988
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-748988 -n embed-certs-748988: exit status 2 (329.650639ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-748988 -n embed-certs-748988
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-748988 -n embed-certs-748988: exit status 2 (319.239144ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-748988 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-748988 -n embed-certs-748988
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-748988 -n embed-certs-748988
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.96s)

TestNetworkPlugins/group/calico/Start (84.85s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m24.851319914s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.85s)

TestNetworkPlugins/group/custom-flannel/Start (87.51s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m27.506981002s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (87.51s)

TestNetworkPlugins/group/bridge/Start (68.76s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0917 01:21:25.127853  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/functional-836309/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:21:27.579816  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/old-k8s-version-963739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:21:27.586213  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/old-k8s-version-963739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:21:27.597629  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/old-k8s-version-963739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:21:27.619160  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/old-k8s-version-963739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:21:27.660647  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/old-k8s-version-963739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:21:27.742823  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/old-k8s-version-963739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:21:27.904367  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/old-k8s-version-963739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:21:28.226265  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/old-k8s-version-963739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m8.756234541s)
--- PASS: TestNetworkPlugins/group/bridge/Start (68.76s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-94mgz" [ee2792ca-a4d3-4c82-9b46-96b3291b4d70] Running
E0917 01:21:28.867980  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/old-k8s-version-963739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:21:30.149940  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/old-k8s-version-963739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:21:32.712173  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/old-k8s-version-963739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003916074s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
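
Note: ControllerPod waits up to 10m0s for a pod matching app=kindnet in kube-system to be healthy. A simplified poll that approximates the harness's wait (single-pod assumption; the real helpers also watch readiness conditions, not just the phase):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx := "kindnet-333616" // context name from this run
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", ctx, "-n", "kube-system",
			"get", "pods", "-l", "app=kindnet",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.TrimSpace(string(out)) == "Running" { // single matching pod assumed
			fmt.Println("kindnet pod is Running")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the kindnet pod")
}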

TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-333616 "pgrep -a kubelet"
I0917 01:21:34.770783  521273 config.go:182] Loaded profile config "kindnet-333616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-333616 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bzvpl" [a58ecb0f-e5e2-49b4-8d93-3f86f1699fdd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 01:21:37.833738  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/old-k8s-version-963739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-bzvpl" [a58ecb0f-e5e2-49b4-8d93-3f86f1699fdd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.006034138s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

TestNetworkPlugins/group/kindnet/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-333616 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-333616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-333616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/flannel/Start (114.95s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0917 01:22:08.558287  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/old-k8s-version-963739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:22:16.229783  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/no-preload-694277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:22:16.236187  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/no-preload-694277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:22:16.247605  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/no-preload-694277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:22:16.269039  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/no-preload-694277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:22:16.310486  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/no-preload-694277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:22:16.392629  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/no-preload-694277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:22:16.554188  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/no-preload-694277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:22:16.875938  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/no-preload-694277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:22:17.517613  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/no-preload-694277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:22:18.799268  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/no-preload-694277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:22:21.360793  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/no-preload-694277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:22:26.482542  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/no-preload-694277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m54.945095527s)
--- PASS: TestNetworkPlugins/group/flannel/Start (114.95s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-333616 "pgrep -a kubelet"
I0917 01:22:32.528359  521273 config.go:182] Loaded profile config "bridge-333616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-333616 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5hpk7" [bd422987-d3b1-485c-8b6f-339a40107dc1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 01:22:36.724669  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/no-preload-694277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-5hpk7" [bd422987-d3b1-485c-8b6f-339a40107dc1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.007218376s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-333616 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-333616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-333616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-7bx5f" [1d5c162c-0d76-43a8-835e-be71965d3f49] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-7bx5f" [1d5c162c-0d76-43a8-835e-be71965d3f49] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007814554s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-333616 "pgrep -a kubelet"
I0917 01:22:48.116742  521273 config.go:182] Loaded profile config "custom-flannel-333616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-333616 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fqwh7" [eae5ea16-1b38-456c-9e95-9642e1377e44] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fqwh7" [eae5ea16-1b38-456c-9e95-9642e1377e44] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.005896712s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-333616 "pgrep -a kubelet"
E0917 01:22:49.520471  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/old-k8s-version-963739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I0917 01:22:49.773446  521273 config.go:182] Loaded profile config "calico-333616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

TestNetworkPlugins/group/calico/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-333616 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bfrdj" [e2b39fc6-901f-4dd1-b6c0-8411f07d0b00] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bfrdj" [e2b39fc6-901f-4dd1-b6c0-8411f07d0b00] Running
E0917 01:22:57.206082  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/no-preload-694277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003750719s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.24s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-333616 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-333616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-333616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-333616 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-333616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-333616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/Start (32.66s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-333616 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (32.664040691s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (32.66s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-333616 "pgrep -a kubelet"
I0917 01:23:34.565002  521273 config.go:182] Loaded profile config "enable-default-cni-333616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-333616 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9lzwz" [02aca781-7df1-4970-9a93-c21d903ee959] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9lzwz" [02aca781-7df1-4970-9a93-c21d903ee959] Running
E0917 01:23:38.168238  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/no-preload-694277/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.003873428s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.18s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-333616 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-333616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-333616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-n8c5l" [3052785c-27db-4891-b0ff-7cb8af373596] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00389285s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-333616 "pgrep -a kubelet"
I0917 01:24:06.181561  521273 config.go:182] Loaded profile config "flannel-333616": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-333616 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4bxtx" [e5360459-4c26-42a3-8abd-0942467b6de9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4bxtx" [e5360459-4c26-42a3-8abd-0942467b6de9] Running
E0917 01:24:11.443005  521273 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/old-k8s-version-963739/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004075582s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.19s)
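
The 15m0s wait above is a poll over pods matching a label until they all report Running. A rough Go sketch of that loop, assuming kubectl is on PATH; waitForLabel is an illustrative name, not minikube's helper:

package net_test

import (
	"os/exec"
	"strings"
	"testing"
	"time"
)

// waitForLabel polls kubectl until every pod matching the label reports
// phase Running, or fails the test when the deadline passes. A sketch of
// the wait behind "waiting 15m0s for pods matching ...".
func waitForLabel(t *testing.T, profile, ns, label string, timeout time.Duration) {
	t.Helper()
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", profile, "-n", ns,
			"get", "pods", "-l", label,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			running := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					running = false
				}
			}
			if running {
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	t.Fatalf("pods matching %q in %q not Running within %v", label, ns, timeout)
}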

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-333616 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-333616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-333616 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

Test skip (27/328)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)
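
These kubectl sub-tests skip on Linux by design: minikube only bundles kubectl on platforms that lack it. A plausible shape for the guard at aaa_download_only_test.go:167 (an assumed sketch, not the verbatim minikube code):

package download_test

import (
	"runtime"
	"testing"
)

// TestKubectlCached sketches the platform gate: on Linux CI this branch
// fires immediately, producing the SKIP entries seen in this report.
func TestKubectlCached(t *testing.T) {
	if runtime.GOOS != "darwin" && runtime.GOOS != "windows" {
		t.Skip("Test for darwin and windows")
	}
	// ... validate the cached kubectl binary here ...
}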

                                                
                                    
TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestAddons/serial/Volcano (0.28s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-069011 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.28s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-962030" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-962030
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (3.41s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-333616 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-333616

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-333616

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-333616

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-333616

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-333616

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-333616

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-333616

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-333616

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-333616

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-333616

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: /etc/hosts:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: /etc/resolv.conf:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-333616

>>> host: crictl pods:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: crictl containers:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> k8s: describe netcat deployment:
error: context "kubenet-333616" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-333616" does not exist

>>> k8s: netcat logs:
error: context "kubenet-333616" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-333616" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-333616" does not exist

>>> k8s: coredns logs:
error: context "kubenet-333616" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-333616" does not exist

>>> k8s: api server logs:
error: context "kubenet-333616" does not exist

>>> host: /etc/cni:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: ip a s:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: ip r s:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: iptables-save:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: iptables table nat:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-333616" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-333616" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-333616" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: kubelet daemon config:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> k8s: kubelet logs:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:14:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-790254
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:14:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: missing-upgrade-787407
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:14:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-865174
contexts:
- context:
    cluster: kubernetes-upgrade-790254
    user: kubernetes-upgrade-790254
  name: kubernetes-upgrade-790254
- context:
    cluster: missing-upgrade-787407
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:14:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-787407
  name: missing-upgrade-787407
- context:
    cluster: pause-865174
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:14:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-865174
  name: pause-865174
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-790254
  user:
    client-certificate: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kubernetes-upgrade-790254/client.crt
    client-key: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kubernetes-upgrade-790254/client.key
- name: missing-upgrade-787407
  user:
    client-certificate: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/missing-upgrade-787407/client.crt
    client-key: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/missing-upgrade-787407/client.key
- name: pause-865174
  user:
    client-certificate: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/pause-865174/client.crt
    client-key: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/pause-865174/client.key
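
Note that current-context is empty and the kubeconfig above has no kubenet-333616 entry, which is why every probe in this debug dump fails during kubectl's client configuration, before any API call is made. A minimal Go reproduction of that failure mode (assumed shape):

package net_test

import (
	"os/exec"
	"testing"
)

func TestMissingContextFails(t *testing.T) {
	// Pinning kubectl to a context that was never created reproduces the
	// "context was not found for specified context" errors seen above.
	out, err := exec.Command("kubectl", "--context", "kubenet-333616",
		"get", "nodes").CombinedOutput()
	if err == nil {
		t.Fatalf("expected a configuration error, got: %s", out)
	}
	t.Logf("kubectl failed as expected: %s", out)
}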

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-333616

>>> host: docker daemon status:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: docker daemon config:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: docker system info:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: cri-docker daemon status:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: cri-docker daemon config:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: cri-dockerd version:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: containerd daemon status:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: containerd daemon config:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: containerd config dump:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: crio daemon status:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: crio daemon config:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: /etc/crio:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

>>> host: crio config:
* Profile "kubenet-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-333616"

----------------------- debugLogs end: kubenet-333616 [took: 3.248211849s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-333616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-333616
--- SKIP: TestNetworkPlugins/group/kubenet (3.41s)
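
The early skip at net_test.go:93 is a runtime gate: the kubenet plugin ships no CNI, while crio cannot run without one. A sketch of that guard (assumed shape; the skipIfCrio helper is illustrative, not minikube's code):

package net_test

import "testing"

// skipIfCrio sketches the gate behind net_test.go:93: kubenet provides no
// CNI, and crio requires one, so the profile skips before a cluster starts.
func skipIfCrio(t *testing.T, containerRuntime string) {
	if containerRuntime == "crio" {
		t.Skip("Skipping the test as crio container runtimes requires CNI")
	}
}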

                                                
                                    
TestNetworkPlugins/group/cilium (3.84s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-333616 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-333616

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-333616

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-333616

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-333616

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-333616

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-333616

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-333616

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-333616

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-333616

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-333616

>>> host: /etc/nsswitch.conf:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> host: /etc/hosts:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> host: /etc/resolv.conf:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-333616

>>> host: crictl pods:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> host: crictl containers:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> k8s: describe netcat deployment:
error: context "cilium-333616" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-333616" does not exist

>>> k8s: netcat logs:
error: context "cilium-333616" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-333616" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-333616" does not exist

>>> k8s: coredns logs:
error: context "cilium-333616" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-333616" does not exist

>>> k8s: api server logs:
error: context "cilium-333616" does not exist

>>> host: /etc/cni:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> host: ip a s:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> host: ip r s:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> host: iptables-save:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> host: iptables table nat:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-333616

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-333616

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-333616" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-333616" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-333616

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-333616

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-333616" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-333616" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-333616" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-333616" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-333616" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> host: kubelet daemon config:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> k8s: kubelet logs:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:14:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-790254
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:14:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: missing-upgrade-787407
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21550-517646/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:14:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-865174
contexts:
- context:
    cluster: kubernetes-upgrade-790254
    user: kubernetes-upgrade-790254
  name: kubernetes-upgrade-790254
- context:
    cluster: missing-upgrade-787407
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:14:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-787407
  name: missing-upgrade-787407
- context:
    cluster: pause-865174
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:14:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-865174
  name: pause-865174
current-context: pause-865174
kind: Config
users:
- name: kubernetes-upgrade-790254
  user:
    client-certificate: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kubernetes-upgrade-790254/client.crt
    client-key: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/kubernetes-upgrade-790254/client.key
- name: missing-upgrade-787407
  user:
    client-certificate: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/missing-upgrade-787407/client.crt
    client-key: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/missing-upgrade-787407/client.key
- name: pause-865174
  user:
    client-certificate: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/pause-865174/client.crt
    client-key: /home/jenkins/minikube-integration/21550-517646/.minikube/profiles/pause-865174/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-333616

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> host: cri-dockerd version:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> host: containerd daemon status:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> host: containerd daemon config:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> host: containerd config dump:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> host: crio daemon status:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> host: crio daemon config:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> host: /etc/crio:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

>>> host: crio config:
* Profile "cilium-333616" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-333616"

----------------------- debugLogs end: cilium-333616 [took: 3.667419633s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-333616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-333616
--- SKIP: TestNetworkPlugins/group/cilium (3.84s)